Ethical AI: An oxymoron or achievable?
The application of AI to insurance may enthuse those looking to cut costs but it’s exercising regulators, alarming consumer advocates and no doubt giving compliance managers sleepless nights.
As controversy reigns, panellists on an Insider Engage/Guidewire webinar met to discuss the pitfalls of this emerging technology. They also sought to describe what ethical AI looks like and how to achieve it.
Guidewire’s Christopher Cooksey, Senior Director, Advanced Analytics, began by noting that the hype around AI belies its already established role within (re)insurance.
For all the popular focus on robots on nefarious missions, predictive modelling – where large bodies of data are studied and patterns extracted using algorithms – has for some time been aiding pricing, underwriting, claims handling and even marketing, he noted.
Swiss Re’s Venkatesh Srinivasan, Head of Claims Solution & Innovation, added that the technology is already helping close the protection gap and make previously uninsurable risks insurable. AI is also speeding up claims settlement, including at Swiss Re via its rapid damage assessment solution.
David King, co-founder and CCO of tech company Artificial Labs, said large language models (LLMs), a type of AI algorithm that uses deep learning and huge data sets, are helping to remove a “fog of unstructured data” in the London market. This is creating efficiencies and greater value for clients.
From there panellists moved on to the question of what ethical AI actually is.
King defined it as “not discriminating against protected classes and being able to explain your decisions”. He pointed to (re)insurers’ general code of conduct and values as a good starting point to achieve this.
However, he warned that pricing and risk selection are vulnerable to unethical AI practices given that such models are, by design, in the business of discriminating between risks. As such, a “robust framework” is vital.
“You should have bias detection and mitigation where all of your AI systems are regularly tested,” he said.
AM Best’s Sridhar Manyem, Senior Director, Industry Research and Analytics, highlighted the danger of variables used in predictive modelling acting together to discriminate against certain groups. “You want to make sure that you understand the variables you’re collecting, the dependency, and you're able to explain to the regulators what you’re doing.”
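The regular bias testing King describes, and the group-level disparities Manyem warns of, can be illustrated with a minimal sketch. The code below is not from the webinar: it is a hypothetical example of one common check, the “four-fifths” disparate impact ratio, applied to model decisions. The group labels, data and threshold are all assumptions for illustration.

```python
# Hypothetical bias check: compare the rate of favourable outcomes
# (e.g. a policy offered at standard terms) across two groups.
# A ratio below 0.8 is a widely used flag for potential adverse impact.

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favourable-outcome rates: protected group vs reference group.

    decisions: list of 1 (favourable) / 0 (unfavourable) model outcomes
    groups:    list of group labels, same length as decisions
    """
    def rate(label):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# Invented model outputs for two hypothetical groups, "A" and "B"
decisions = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="A", reference="B")
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.75 for this data
if ratio < 0.8:
    print("flag for review")
```

In practice such a test would run regularly across all deployed models, and, per Manyem's point, would be paired with checks on correlated input variables that can act as proxies for protected characteristics even when those characteristics are never collected directly.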
Cooksey called for a proactive approach to make AI ethical rather than relying on an “I’m just following the data”-type defence.
So which types of policyholder are most at risk from unethical AI?
Panellists quickly expanded the danger zone beyond personal lines to small commercial, workers’ compensation, medical-related claims within casualty, and employment practices liability.
Swiss Re’s Srinivasan warned of dangers when AI deviates from traditional, objective parameters in risk evaluation and claims assessment. Like King, he stressed the importance of avoiding bias in the data and, echoing Manyem, of carriers being able to explain how it all works.
“If there are a certain set of biases, the policyholder or the claimant definitely are harmed. But at a broader level, it can lead to a huge reputational crisis for the insurance company, and a big erosion of trust, not just for the company but for the industry as a whole.”
Painting a picture of the inadequacy of a 'just following the data' approach, Cooksey raised the possibility that different policing standards for minority groups could result in hidden distortions. "What a speeding ticket means on the whole is not necessarily what it means for that one particular group," he said, which could leave minority groups with an average personal auto rate that is not accurate for them.
King added that broker relationships could also be harmed by dubious AI practices and stressed the “commercial priority” as well as the ethical one of using AI correctly.
Panellists cited multiple cases of watchdogs getting their act together over AI, with the National Association of Insurance Commissioners’ taskforce and the Colorado Division of Insurance’s clampdown on AI in the life sector among notable examples.
Nevertheless, with rules still generally embryonic, or non-existent, what can carriers do to encourage watchdogs to cut them some slack?
Guidewire’s Cooksey noted that enough of a transatlantic consensus has emerged to provide valuable markers.
“The idea of using AI to enhance a human's ability to do their job tends to be much more accepted than the idea of using AI to take the human out of the loop,” he said. “And I think the more regulators see the industry taking a proactive approach, the more they'll be willing to say, ‘Okay, hands off, let's see you do it’.”
Governance was a key theme in the discussion, yet, just as with cyber security a few years ago, lines of responsibility for AI within companies are often still blurred.
Panellists suggested the buck should stop with a member of the c-suite, with Artificial’s King advocating a multi-disciplinary team feeding into them.
Speakers rounded off by predicting how AI would change the industry in the next five to 10 years.
The multiple expected impacts include a transformation in the way information flows from insured to insurer (Cooksey), fraud reduction (Manyem), cost savings, greater ease of cross-selling and improved risk mitigation (King).
Swiss Re’s Srinivasan is looking to AI to bridge the protection gap and facilitate the creation of new products and risk pools.
King anticipated clear winners and losers as AI develops.
“You'll get a bifurcation in the market over the next five to 10 years between people that can deploy these technologies and the people that can't, and there's going to be a tailing off because there are going to be some insurance companies that won't have access to the technical acumen, the investment capital, the data to build the models and deploy them.”
Here AM Best’s Manyem appeared to see things differently, anticipating a more “democratic technology” that all insurers will adopt.
“Some might adopt it faster, some might adopt it later. But I don't think it's going to be something one person does and everybody’s out of business.”
Voicing the panellists’ overall optimistic view of AI’s potential, with clear provisos, Srinivasan added, “There is a lot more promise than fear when we talk about AI, because there are definitely a lot of big industry challenges that can be addressed.
“Of course, it needs a lot more governance, it needs a lot more trust, it needs a lot more outreach. It is a steady path where we should not rush through it.”