Perhaps one of the androids on HBO's sci-fi series Westworld said it best when reflecting on the AI technologies that control her fate and freedom: "It's not about who you are. It's about who they'll let you become."
Today, artificial intelligence, machine learning, and other analytics technologies are used by a growing number of organizations in virtually every industry, including P&C insurance. Unlike the malevolent AI of Westworld, carriers apply these technologies to make better decisions, tailor coverage more precisely, accelerate claims processes, and find new ways to protect customers.
But as AI analytics grow more sophisticated and pervasive, important questions about their fairness have emerged.
Specifically, there's growing concern that data and analytics can perpetuate biases that disadvantage certain groups through higher premiums, worse claims outcomes, and poorer service. After all, these outcomes can impact where people live; their exposure to danger, damage, theft, or legal liabilities; their ability to start a business or build financial security; and ultimately, their agency over who they can become.
As a result, the ability to ferret out biases and build trust in AI-based analytics is now paramount.
Trust Buster: The Devil in the Data
To understand why, consider a specific example. In the US, evidence suggests that "redlining" (the discriminatory practice of denying or limiting financial services to neighborhoods based on their residents' race and income) lives on in the data used by insurers, banks, and others.
Beginning in the 1930s, redlining predominantly harmed Black and poor communities, leading to lower investment, a lack of housing development, reduced access to capital, and more. Those harms can correlate with variables insurers otherwise legitimately use in underwriting and pricing, such as ZIP codes, crime rates, and average home age.
Ultimately, AI, as used in predictive modeling, is an excellent pattern finder. If the legacy of redlining lives on in typical insurance data, then uncritical use of AI will be remarkably good at reproducing those same unwelcome patterns in today's rating and underwriting rules. But it doesn't need to be that way.
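As a simplified illustration, consider the sketch below: a pricing model is trained without ever seeing a protected attribute, and the check asks whether its predictions track that attribute anyway through correlated neighborhood features. The file and column names (policies.csv, protected_group, and so on) are hypothetical stand-ins for an insurer's own data.

```python
# A minimal sketch of a proxy-bias check, assuming a historical policy file
# with hypothetical column names. The model never sees the protected
# attribute, yet its predictions may still track it through correlated
# neighborhood features such as ZIP-derived variables.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("policies.csv")  # hypothetical historical policy data

features = ["zip_median_home_age", "zip_crime_rate", "prior_claims"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    df[features], df["premium"], df["protected_group"], random_state=0
)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
preds = pd.Series(model.predict(X_test), index=X_test.index)

# Compare average predicted premiums across groups the model never saw.
# A large gap signals that seemingly neutral features are acting as
# proxies for group membership and deserves investigation.
group_means = preds.groupby(g_test).mean()
print(group_means)
print("Relative gap:", group_means.max() / group_means.min())
```

A check like this doesn't prove discrimination on its own, but it turns a vague worry about inherited bias into a number a reviewer can act on.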
Building Bias-Free Outcomes: Key Steps for Insurers
With society (and increasingly, regulators) questioning the fairness of AI-enabled decision-making, insurers must build trust in their analytics. Guidewire’s new white paper on this topic outlines steps for achieving that, including the following.
Implement New Checks and Balances: Actuaries are experts in the ethical application of data and models to ensure fair pricing. Today, advanced analytics permeate well beyond pricing to areas such as claims handling and fraud detection, where there are fewer regulations and less actuarial oversight. Insurers should ensure the same rigor in every aspect of their operations – from quote to claim.
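As a sketch of what that rigor could look like outside pricing, the snippet below compares how often an automated fraud model flags claims across customer groups, using the "four-fifths" rule of thumb from US employment law as a rough alert threshold. The decision-log file and column names are hypothetical.

```python
# A minimal sketch of a parity check for automated claim decisions, assuming
# a hypothetical decision log with "fraud_flag" (0/1) and "group" columns.
import pandas as pd

def flag_rate_parity(decisions: pd.DataFrame, flag_col: str, group_col: str) -> float:
    """Return the ratio of the lowest to highest flag rate across groups.

    Values near 1.0 indicate parity; values below ~0.8 (the four-fifths
    rule of thumb) suggest one group is flagged disproportionately often.
    """
    rates = decisions.groupby(group_col)[flag_col].mean()
    return rates.min() / rates.max()

decisions = pd.read_csv("claims_decisions.csv")  # hypothetical decision log
ratio = flag_rate_parity(decisions, flag_col="fraud_flag", group_col="group")
print(f"Flag-rate parity ratio: {ratio:.2f}")
```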
Audit Everything: Insurers are responsible for ensuring sound governance across the entire data ecosystem, including external data providers, insurtechs, and more. Due-diligence reviews of data ethics across the insurance supply chain should become standard protocol.
Embrace Transparency: It's not enough for AI analytics to be accurate. They must also be explainable, and they can be when explainability is made a priority from the start. Transparency allows outcomes to be easily understood and effectively scrutinized, both internally and by customers, regulators, and other stakeholders.
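One practical route to that kind of transparency is to measure which inputs actually drive a model's decisions. The sketch below uses scikit-learn's permutation importance on synthetic data as a stand-in; in practice, an insurer would run the same check against its production model and holdout data.

```python
# A minimal sketch of one route to explainability: permutation importance
# measures how much model accuracy drops when each input is shuffled,
# revealing which features the model actually leans on. Synthetic data
# stands in for an insurer's production model and holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the accuracy drop; large drops
# mark the features driving decisions, flagging any suspect inputs
# (e.g., ZIP-derived variables) for review before deployment.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```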
AI-enabled analytics informed by large-scale, high-quality data sets promise enormous benefits for insurers and the people they serve, but only if prospects and policyholders can trust those technologies. Returning to Westworld once more, the words of another character seem apt: "Everything in this world is magic, except to the magician." If AI analytics are to be trusted, the magicians behind these systems must demystify their technological wizardry.