
Artificial intelligence and insurance: Part 2 - The steps to integration

Insurers want to leverage AI in their operations, but they must have the right foundation to support it – and grasp the risks that such implementation presents

Image credit: Christopher Burns

“Data is at the core of our business, as it defines the risks we underwrite, the premiums we charge and the claims we pay out,” says Joan Cusco, Global Head of Transformation at global insurer MAPFRE, who explains why insurers are stepping up the integration of artificial intelligence.

“We are one of the most intensive data-driven sectors in the world, and today data means AI. Smarter use of data – thanks to AI – allows us not only to boost the performance of our current business but to enter new risk areas that were unattainable in the past.”

In Part One of this feature we looked at the transformative power of AI solutions: how machine learning and natural language understanding can turbocharge marketing, underwriting and claims management. In Part Two, we examine the main challenges associated with actually integrating AI into insurance operations.

Test and learn

Such is the potential of AI that anyone starting an insurer today should be building it around a framework of data and analytics, according to David Ovenden, Global Director, Pricing, Product, Claims and Underwriting at Willis Towers Watson. “As AI technology matures, you would deploy more into that framework,” he says. “You could even leapfrog the robotics phase, because data would be transferred seamlessly.”

While the incumbents don’t have that luxury, he adds, with the right approach they can still push ahead with AI projects: “For insurers that have rich histories, unstructured data and lots of domain expertise, their challenge is to understand the gaps and correctly prioritise where to deploy AI. But then learn by ‘doing’ – test and learn.

“If all an insurer does is foundational data transformation and legacy integration it is a really dull-looking project. We suggest doing work in parallel. Do the important foundational work but also consider more innovative projects as well. It means you can build the critical, foundational components that will benefit the enterprise for the next decade, and engage people at the same time,” Ovenden advises.

Short-term obstacles

Alan Tua, former head of analytics at Direct Line and now portfolio director at Ki, the AI-powered Lloyd’s syndicate, says the obvious short-term obstacle for many insurers is that they do not have the underlying data foundations in place to make AI an “easy” option for an organisation.

“These include core technical foundations like scalable cloud infrastructure with the right tooling in place, but frequently also extend to ‘softer’ considerations like agile operating models, software-product mindsets and a culture which is simultaneously willing to experiment but disciplined enough to halt failed AI projects early,” he warns.

Finding the necessary talent can be problematic, according to Gurpreet Johal, UK and global sector leader for global reinsurance and London market at consultants Deloitte. “Scaling AI solutions for insurance requires AI engineering skills, data engineering skills and model building skills. There’s a lot of competition for this skill set right across the financial services sector,” Johal says.

“The London market does have the advantage of being such a concentrated market, but it also means competition is high, making talent expensive,” he adds. “This talent is in demand globally. It all means that the ability of insurers to partner becomes more important, as an alternative to building and owning your own [AI] resources.”

Three steps to AI heaven

Swiss Re’s Jonathan Anchen, author of a recent sigma report on machine intelligence in insurance, suggests three steps insurers should take when embarking on AI integration. First, insurers should start with process steps amenable to AI, rather than attempt large-scale transformations. Successful AI-enabled system implementations, he explains, have narrowly defined objectives and follow clear milestones.

Second, he suggests combining new and conventional modelling approaches: newer AI methods, such as deep learning, can be used to supplement more conventional ones like generalised linear models (GLMs).
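To make that pairing concrete, the sketch below puts a conventional Poisson GLM for claims frequency alongside a gradient-boosted model that learns what the GLM misses. It is an illustrative assumption of how the two model families might be combined, written in Python with scikit-learn; the column names, the motor-book framing and the residual-correction design are hypothetical, not a method prescribed by Anchen or Swiss Re.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler


def make_preprocessor() -> ColumnTransformer:
    # Scale numeric rating factors, one-hot encode the categorical ones.
    return ColumnTransformer([
        ("num", StandardScaler(), ["vehicle_age", "driver_age"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ])


def fit_combined_frequency_model(policies: pd.DataFrame):
    # Hypothetical motor book: "claim_count", "exposure" and the rating
    # factors below are assumed column names, not a real schema.
    features = ["vehicle_age", "driver_age", "region"]
    X = policies[features]
    y = policies["claim_count"] / policies["exposure"]  # observed claims frequency

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Conventional component: a Poisson GLM, the classical frequency model.
    glm = make_pipeline(make_preprocessor(),
                        PoissonRegressor(alpha=1e-3, max_iter=500))
    glm.fit(X_train, y_train)

    # Newer component: a gradient-boosted model trained on the GLM's residuals,
    # i.e. the signal the linear structure could not capture.
    residuals = y_train - glm.predict(X_train)
    booster = make_pipeline(make_preprocessor(),
                            GradientBoostingRegressor(random_state=0))
    booster.fit(X_train, residuals)

    # Combined prediction: GLM baseline plus ML correction, floored at zero.
    blended = np.clip(glm.predict(X_test) + booster.predict(X_test), 0.0, None)
    return glm, booster, blended
```

The design choice reflects the quote: the GLM keeps the transparent, regulator-familiar baseline, while the machine-learning component is confined to a correction term that can be switched off or inspected separately.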

Lastly, Anchen urges collaboration between centralised and distributed data science teams: “At many insurers, if an analytics team in one division builds a successful algorithm for a particular issue, there is little structure to facilitate its adaptation in other divisions. Larger insurers are building playbooks that all divisions can consider, including algorithms to accelerate claims settlements, identify fraud, improve loss reserving, and suggest when claims cases may become lawsuits.”

Transparency and explainability

Matt Zender, Senior Vice President, Workers’ Comp Strategy at AmTrust Financial Services, stresses the need to strike the right balance between “courageously” using the new technology and carefully ensuring it is being applied appropriately: “Understanding that AI is only as smart as the framework that it was built within and can only be applied for the intended goals is the key. Overreaching, overreliance and the winner’s curse are all areas to be mindful of for insurers assessing the risks in the adoption of AI.”

An increased reliance on AI is not without risk. Machine-learning models are notoriously difficult to explain, as most operate as black boxes. “It is a huge barrier to acceptance of new technologies in the insurance industry. This challenge has thrust ‘explainable AI’ into the spotlight,” says Pamela Negosanti, Head of Sales and Sector Strategy at tech firm expert.ai.

“Unlike black-box AI, you know exactly how explainable AI applications arrive at a decision and can see the logic behind the reasoning and results. This is a major advantage, as insurance is heavily regulated. So the bottom line here is that transparency and explainability build trust in AI, and insurers today can align AI benefits with compliance,” she says.

Swiss Re’s Anchen agrees that insurers would do well to invest in model explainability and interpretation techniques: “As newer MI tools demonstrate productive potential in the enterprise context, more emphasis is placed on ‘explainable AI and MI’ – that is, algorithms with higher accuracy levels (relevant for specific business-use cases) need more explanation before they will be acceptable across a broader range of business contexts.”
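One widely used, model-agnostic interpretation technique is permutation importance, which scores each input by how much shuffling it degrades a fitted model’s performance. The sketch below applies it to a hypothetical claims-fraud classifier; it illustrates the general idea only, not the specific explainable-AI tooling used by expert.ai or Swiss Re, and the feature names and “is_fraud” label are assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split


def explain_fraud_model(claims: pd.DataFrame) -> pd.Series:
    # Hypothetical claims-fraud data set: the feature names and label below
    # are assumptions for illustration.
    features = ["claim_amount", "days_to_report", "prior_claims", "policy_age"]
    X, y = claims[features], claims["is_fraud"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Score each feature by how much shuffling it hurts held-out performance,
    # so the explanation reflects what the model generalises on rather than
    # what it memorised during training.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=20, random_state=0
    )
    return pd.Series(result.importances_mean, index=features).sort_values(
        ascending=False
    )
```

Ranked importances of this kind give underwriters and compliance teams a first, human-readable view of which inputs drive a model’s decisions, which is the practical starting point for the explainability Negosanti and Anchen describe.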

Unintended consequences

Meanwhile, AI-specific solutions incur all the usual IT risks a non-AI digital project would – such as data security and cost of implementation, Ki’s Alan Tua says: “Open-source and paid-for services have made it much easier to build AI applications. The issue is that without the right experience in place, building something and understanding its consequences are completely different – which introduces risks of unintended outcomes which can have ethical, regulatory or financial consequences.”

MAPFRE’s Joan Cusco concurs: “First off, you have legitimate concerns and regulations around data privacy. AI needs huge data sets for training, and the storage, processing, and anonymization of those data sets can be a challenge, especially when it comes to international environments (data transfer) and unstructured data (especially photo and video).”

Can insurers afford to ignore AI? Tua thinks not. “As competitors race through the virtuous cycle of machine learning (more customers = more data = better models = more customers), [those that hold back] face being left with limited and disorganised data sets that restrict improvements to their expense ratios and their ability to select risk in as data-driven a way as possible.”