
Insurance’s Machine Intelligence challenge and what to do about it


by Dr. Jeffrey Bohn, Managing Director and Chief Research & Innovation Officer, Swiss Re Institute

19 August 2020

We are pleased to share this latest guest blog from ICMIF Supporting Member organisation Swiss Re with comments from Dr. Jeffrey Bohn, Managing Director and Chief Research & Innovation Officer at the Swiss Re Institute. The original article accompanied the publication of the latest sigma report, “Machine intelligence in insurance: insights for end-to-end enterprise transformation”. The article is shared here with the kind permission of Swiss Re and the Swiss Re Institute.

A critical lack of quality data and data engineering is preventing the insurance industry from harnessing the rapid advances being made in machine intelligence (MI), including conventional curve fitting, machine learning, and artificial intelligence.

Insurers already use conventional machine intelligence; however, inadequate data limit model performance. Newer forms of machine intelligence – such as deep learning, a popular type of machine learning, and artificial intelligence, defined here as adaptive learning systems – are even further hampered by data issues.

While such technologies are already widely deployed in areas such as customer analytics and claims processing, much greater potential lies in the ability of insurers to use the growing (and already massive) data sets from the digital economy and the Internet of Things so they can better design and price protection cover. These technologies could also revolutionise the way insurers manage their businesses.

However, the absence of meaningful data is preventing this transformation from happening at scale, according to the Swiss Re Institute’s latest sigma report, Machine intelligence in insurance: insights for end-to-end enterprise transformation.

The insurers equipped and ready to address this problem will have a significant advantage; they’ll be able to offer products and services that meet the expectations of an emerging consumer generation of digital natives. At the same time, these companies will be more successful at effectively and accurately pricing risk. Moreover, curated data plus relevant machine intelligence will lead to new opportunities for insurance innovation.

The threat of disruption

The need for the insurance industry to move quickly is pressing, since some of the tech industry’s biggest and most formidable players could soon start offering their own insurance products directly to users.

“With the deep and detailed understanding they have of those users, thanks to the way they collect and analyse data, this could become an existential threat to traditional insurance providers; alternatively, insurers that figure out how to partner productively with big tech could leave their competitors in the dust as the insurance industry transforms,” warns Dr. Jeffrey Bohn, Managing Director and Chief Research & Innovation Officer at the Swiss Re Institute.

“Rather than look for quick wins from digital technology, our sector needs to identify new transformative enterprise-scale models and paradigms that will offer long-term, sustainable successes.”

Access to the right data at the right time and in the right format is a necessary first step toward the adoption of new models and new opportunities that are driven by digital technology.

This is something the tech sector’s most well-known names are already familiar with – they understand the latent power of data and know how to maximise its value.

The data challenge: First, build solid foundations

There are a number of interconnected reasons for the delayed adoption of new technology, and one is the lack of actionable data. “That isn’t to suggest there aren’t sufficient volumes of data,” Bohn explains. “Indeed, there are more data available to individuals and organisations now than ever before.”

“No, the challenge facing the insurance industry is that much of these data are incomplete, noisy, not well curated, not available at the time most needed, and sometimes lost somewhere in an organisational silo. Finding and curating the right data is a vital first step in the creation of an algorithm.”

“A data item only has value if it is collected, curated, transformed, and processed in a way that meets a specific need at the right time and in a form that makes sense for the objective.”
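
To make this concrete, here is a minimal sketch in Python (using the pandas library) of what such a collect-curate-transform step might look like. The claims extract, column names, and plausibility rule are hypothetical, invented purely to illustrate the idea:

```python
# A minimal data-curation sketch. The claims extract and its columns
# are hypothetical; the point is the collect-curate-transform pattern.
import pandas as pd

def curate_claims(raw: pd.DataFrame) -> pd.DataFrame:
    """Turn a raw claims extract into an analysis-ready table:
    parse types, drop duplicates, and flag suspect records."""
    df = raw.copy()
    # Transform: enforce the types downstream models expect.
    df["claim_date"] = pd.to_datetime(df["claim_date"], errors="coerce")
    df["claim_amount"] = pd.to_numeric(df["claim_amount"], errors="coerce")
    # Curate: remove exact duplicates arriving from siloed systems.
    df = df.drop_duplicates(subset=["claim_id"])
    # Flag, rather than silently drop, incomplete or noisy records,
    # so problems can be traced back to their source.
    df["is_complete"] = df[["claim_date", "claim_amount", "policy_id"]].notna().all(axis=1)
    df["is_plausible"] = df["claim_amount"].between(0, 10_000_000)
    return df

raw = pd.DataFrame({
    "claim_id": [1, 1, 2, 3],
    "policy_id": ["P-01", "P-01", "P-02", None],
    "claim_date": ["2020-01-05", "2020-01-05", "not a date", "2020-03-01"],
    "claim_amount": ["1200", "1200", "450", "-50"],
})
print(curate_claims(raw))
```

The design choice worth noting is that suspect records are flagged rather than deleted: curation that silently discards data simply creates a new, invisible silo.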

In London, the Swiss Re headquarters at 30 St Mary Axe – affectionately known to some as The Gherkin – is one of the city’s most recognisable buildings. It has won accolades and awards for both its form and function. But try to imagine what it might have looked like had it been built without the guiding hand of an architect. The idea of starting work on any building without the involvement of an architect or the creation of rock-solid foundations seems patently ridiculous.

Yet, in many ways the failure to establish solid data engineering processes in the creation of digital tools is like starting work on a skyscraper without first formulating and sketching out architectural plans to follow.

Data in motion: Not a lake, but a river

Data is generally regarded as a stock asset with discrete value, and this has led to it being stored and processed in particular ways. But data’s real value comes from the power it develops when multiple data sets are combined, often in real-time, to react to situations and deliver services when needed.

Bohn explains: “There is a transformational energy waiting to be unlocked from data. Customer insights can be augmented with data from other sources (that might include weather, news, social media and a host of other potential data sets) to give a level of detailed analysis that has never been available before.”

Insights with greater depth and greater clarity can form the basis of new products, new go-to-market strategies, or even help create entirely new markets.

“Imagine a car insurance product that only charges the customer when they have been driving, for example. Or the ability to insure crops in developing economies by drawing many different sources of live data together. That could be truly revolutionary,” Bohn says.
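
As a toy illustration of the first idea – not an actual pricing model – a pay-as-you-drive policy could accrue premium only for minutes actually spent behind the wheel. The per-minute rate and the night-driving risk factor below are invented for the example:

```python
# Toy usage-based motor cover: premium accrues only while driving.
# The rate and the night-driving multiplier are invented examples.
from dataclasses import dataclass

@dataclass
class Trip:
    minutes: float       # duration of the trip
    night: bool = False  # hypothetical risk factor: night driving

def usage_based_premium(trips: list[Trip],
                        rate_per_minute: float = 0.02,
                        night_multiplier: float = 1.5) -> float:
    """Charge only for minutes actually driven; no trips, no premium."""
    total = 0.0
    for trip in trips:
        multiplier = night_multiplier if trip.night else 1.0
        total += trip.minutes * rate_per_minute * multiplier
    return round(total, 2)

week = [Trip(minutes=25), Trip(minutes=40, night=True), Trip(minutes=10)]
print(usage_based_premium(week))  # 1.9 – 35 day minutes plus 40 night minutes
```

The hard part is not this arithmetic but the data river behind it: the live telematics feed that tells the insurer when, and in what conditions, the customer was actually driving.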

The challenge facing organisations in the insurance industry is that, over time, business processes have evolved to keep data in silos. But data isn’t static – rather than think of it as accumulating in lakes, it should be thought of as fluid and flowing like a river.

Getting real value out of it requires putting oneself (and a business process) in the middle of that constantly moving data river, capturing insights and performing analysis in real-time.
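
In code terms, that means a process that subscribes to several time-ordered feeds and enriches each event the moment it flows past, rather than batch-querying a store after the fact. The sketch below is a simplified, in-memory stand-in for real streaming infrastructure; the feeds and their fields are hypothetical:

```python
# Sketch of standing in the "data river": enrich each driving event
# with the most recent weather reading as the streams flow past.
# Feeds and fields are hypothetical stand-ins for live sources.
import heapq

driving_events = [  # (timestamp, vehicle_id, speed_kmh)
    (1, "V1", 52), (4, "V2", 88), (7, "V1", 61),
]
weather_readings = [  # (timestamp, condition)
    (0, "clear"), (5, "heavy rain"),
]

def enrich(events, weather):
    """Merge two time-ordered streams, attaching the latest weather
    condition to each driving event as it arrives."""
    merged = heapq.merge(
        ((t, "weather", condition) for t, condition in weather),
        ((t, "event", (vid, speed)) for t, vid, speed in events),
    )
    latest_weather = None
    for t, kind, payload in merged:
        if kind == "weather":
            latest_weather = payload
        else:
            vid, speed = payload
            yield {"t": t, "vehicle": vid, "speed": speed,
                   "weather": latest_weather}

for record in enrich(driving_events, weather_readings):
    print(record)
```

The same join run overnight against a data lake would answer yesterday’s question; done in the stream, it can price, warn, or settle while the situation is still unfolding.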

Fixing incomplete or missing architecture

In a report last year, Gartner found that fewer than 10% of chief data officers are able to measure the financial value of their data assets. Meanwhile, a study of data professionals’ roles found that many spend 67% of their time on basic data curation and compilation activities.

These findings demonstrate the importance of data architecture: precise specifications of the data required, its source(s), how it will be used, and what the desired outcomes should be. “Although data and technology are at the heart of such matters, they need to be seen for what they really are – business decisions that will determine the likely success of new tools, new models, and new ways of working,” Bohn says.

Without a clear data architecture, the data drawn into a process will not necessarily be fit for purpose, leaving highly skilled and highly sought-after data scientists performing the most mundane of data cleansing work. That can lead to duplication of efforts to fix problems or to a lack of confidence in the data – and, of course, can compromise the efficacy of the MI tools using that data.
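
One way to make such an architecture explicit – rather than leaving it implied in scattered processing scripts – is to state it as code: for each field, where it comes from and what ‘fit for purpose’ means. The schema below is a hypothetical sketch of that idea, not a prescription:

```python
# A declarative, per-field data specification: source system plus a
# "fit for purpose" rule. Field names and rules are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FieldSpec:
    name: str
    source: str                      # system of record for this field
    check: Callable[[object], bool]  # what "valid" means downstream

POLICY_SCHEMA = [
    FieldSpec("policy_id", "core_admin",
              lambda v: isinstance(v, str) and v.startswith("P-")),
    FieldSpec("sum_insured", "underwriting",
              lambda v: isinstance(v, (int, float)) and v > 0),
    FieldSpec("inception_date", "core_admin",
              lambda v: isinstance(v, str) and len(v) == 10),
]

def validate(record: dict) -> list[str]:
    """Return the fields that are missing or fail their check, so
    problems surface before a model ever sees the data."""
    return [spec.name for spec in POLICY_SCHEMA
            if spec.name not in record or not spec.check(record[spec.name])]

print(validate({"policy_id": "P-123", "sum_insured": -5}))
# ['sum_insured', 'inception_date']
```

Checks like these are exactly the mundane work that otherwise falls to data scientists; encoding them once, at the point of ingestion, frees those specialists for modelling.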

Appreciating the task ahead

This lack of understanding about the importance of data engineering has sometimes led to disappointment.

“You can spend a small fortune on a very fast car,” says Bohn. “But if you don’t have a finished road to drive it on, you won’t be going very fast at all.”

As one of the slower adopters of digital transformation, the insurance industry has just started to experience the ‘trough of disillusionment’ in most machine-intelligence areas, the latest sigma says. The industry is still far from entering the ‘slope of enlightenment’ for these new technologies, the report notes.

“The challenges facing the sector can only be fully addressed with the appreciation of the extent to which MI and related technologies call for a realignment of processes and outlooks,” Bohn said. “In the immediate term, data engineering must be seen as the crucial starting point. But to bring that to fruition, the sector will need to attract and retain the right talent, in the form of data engineers and architects – people who can design processes from the ground up.”

If the industry can adopt such practices – and combine them with the deep knowledge and experience of structuring risk products and regulation – it will be well positioned to face future disruption and competition.

In conclusion, Jeffrey Bohn says: “In the past several years, most insurers have invested significantly in pilots exploring machine intelligence. Unfortunately, these investments have not led to transformative improvements in business profitability. Rather than look for quick wins from novel digital technology, the insurance sector needs to invest much more in foundational technology such as data engineering. At the same time, senior executives should look to enterprise-scale models and paradigms that will offer long-term, sustainable successes. These enterprise-scale efforts often require a different mix of algorithms and organisational processes than the latest and greatest AI-enabled system that performs well in isolation.”

For member-only strategic content on the cooperative/mutual insurance sector, ICMIF members have exclusive access to a range of online resources through the ICMIF Knowledge Hub.
