It’s time to tackle data standards
Capturing, storing, securing and processing data is just the start of the headache for insurers. Without a fully implemented industry standard, the sector is still lagging behind in its efforts to harmonise data strategy
In the current age of big data, with technology firmly embedded into nearly every facet of day-to-day life, the ways in which businesses collect, store, and analyse information have become an increasingly important part of every strategy.
Unfortunately, there are still many issues facing insurers, who do not uniformly apply an industry-wide standard for data quality – and the problem is coming to a head in some parts of the market.
As Ben Heaney, CEO of CPRI platform Dialogue, notes: “Typically, the credit and political risk insurance market (CPRI) has seen everyone capturing, storing, and processing client and risk data slightly differently.”
“This non-standard approach has been repeated throughout classes within the global specialty market - until now, everyone has been doing the same thing ever so slightly differently, using slightly different terms and procedures for moving data between client, broker and underwriter.”
Essentially, insurers have failed to keep pace. Research from McKinsey & Co found that anywhere from 30 to 40 percent of an underwriter’s time is spent on administrative tasks such as rekeying data or manually executing analyses.
Addressing this inefficiency is increasingly becoming unavoidable, with the global pandemic forcing all businesses to evaluate where they could work better digitally.
“Data, analytics, and innovation speed are even more necessary and potent tools in periods of accelerated change,” says Roger Arnemann, general manager of ADS at insurtech platform Guidewire Software.
“To flourish, insurers must have a platform to curate internal and external data and feed it into core workflows to drive informed decisions across the insurance lifecycle,” he says. “With curated, actionable data, insurers can better manage risk, personalise customer experience, and anticipate rapid market changes.”
This is easier said than done, however, and many struggle to wrangle disparate data sources, which makes it challenging to create models, insights, and potential actions.
Steps towards standardisation
This silo mentality, and the impact it can have on data management and client service, is now being recognised at a top level.
In the UK, the Information Commissioner’s Office has acknowledged these challenges in its recently updated ‘Data Sharing Code of Practice’, which aims to guide organisations on sharing data and the risks that need to be addressed. However, some are unsure about the impact of this work.
“While eminently sensible at a theoretical level, it is difficult to implement in practice in a way that ensures commonality throughout a multi-link data supply chain, such as in the insurance industry,” says Camilla Winlo, director of consultancy at data privacy and security specialists DQM GRC.
“[The insurance industry] should recognise the importance of ensuring that privacy by design is built into process and change management,” she says. “Tools such as data protection impact assessments are invaluable for supporting organisations in analysing their data supply chains, recognising the risks and hazards within them, and identifying the appropriate controls to address them.”
As it stands, there is no overarching data standard adhered to across the whole (re)insurance industry. The closest thing is ACORD, the global data standards agency for the insurance industry, which was founded over 50 years ago to provide standardised forms to the US P&C industry; it released its first electronic data standards 40 years ago and introduced data standards for the London market 20 years ago.
ACORD claims that its standards are used by all of the world’s largest brokers and solution providers, as well as around 80 percent of the world’s largest (re)insurers, but take-up of ACORD standards has yet to become universal across the entire (re)insurance ecosystem.
That foundation is now being built upon elsewhere, notably with the launch of Blueprint Two by Lloyd’s of London at the end of 2020.
This two-year programme builds on 2019’s Blueprint One with the aim of shifting the market towards a digital ecosystem. Lloyd’s is pursuing an £800mn reduction in operating costs for itself and its partners, with clearer data standards for market participants to work with.
“There is a crunch point coming and the people at Lloyd’s have recognised that with Blueprint Two and the emphasis on data standards,” says Marcus Broome, chief platform officer at digital (re)insurance trading platform Whitespace.
“The future of [Blueprint Two] has some very positive developments on data standards within it. This could get some momentum going on this issue, and if we can conform more of our data to a shared standard, that is hugely exciting.”
However, some are not convinced about industry standardisation happening quickly. Richard Stewart, CEO of data conversion specialist Untangl, thinks efforts should instead be focused on improving connectivity between various parties throughout the (re)insurer value chain.
“Industry standardisation is one possible answer, but it's not likely to happen any time soon, and I’d argue it’s an imperfect solution anyway, as it doesn’t rule out margins for human error at the point of data input,” says Stewart.
Even with such a solution, he argues, improperly entered data would still cause issues within a standardised template.
“A far more practical and powerful solution is the deployment of interfacing tools that enable the exchange of data to each party's defined standards, removing the need to disrupt existing systems,” he says.
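The interfacing approach Stewart describes can be illustrated with a minimal sketch: each party keeps its own field names, and a declarative mapping translates records between schemas at the point of exchange. All schema and field names below are hypothetical, invented for illustration; real interfacing tools would also handle validation, units, and nested structures.

```python
# Hypothetical mapping from a broker's field names to an underwriter's.
# Neither party has to change its existing systems; only the mapping
# layer knows about both schemas.
BROKER_TO_UNDERWRITER = {
    "insured_name": "client",
    "sum_insured_gbp": "limit",
    "inception": "start_date",
}

def translate(record: dict, mapping: dict) -> dict:
    """Re-key a record into the receiving party's schema.

    Fields without an entry in the mapping pass through unchanged.
    """
    return {mapping.get(field, field): value for field, value in record.items()}

broker_record = {
    "insured_name": "Acme Exports Ltd",
    "sum_insured_gbp": 5_000_000,
    "inception": "2024-01-01",
}

underwriter_record = translate(broker_record, BROKER_TO_UNDERWRITER)
```

The design point is that the mapping, not the systems on either side, absorbs the differences – which is why Stewart argues it avoids the disruption that a single imposed standard would entail.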
Conversations around improving data standards in insurance inevitably adopt an industry-wide view, with experts pointing to gaps and mismatches in increasingly complex value chains. However, Winlo argues insurers should not forgo reviewing their own systems.
“Addressing these issues requires an insurer to begin by creating detailed soup-to-nuts processes and data flow maps that start and end outside of its own organisation,” she says.
“These maps will support the organisation in visualising its data needs and risks. In our experience, the process of creating these maps often improves organisational knowledge of what data is captured, how it is captured and how it is used. These visualisations clarify where the greatest impacts of improved data governance will be felt.”
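A data flow map of the kind Winlo describes can be represented, in its simplest form, as a directed graph of data hand-offs, with flows that cross the organisational boundary flagged as the points where governance risk concentrates. The node names below are hypothetical, chosen only to illustrate a map that "starts and ends outside" the insurer's own organisation.

```python
# Parties inside the insurer's own organisation (hypothetical names).
INTERNAL = {"underwriting", "claims", "data_warehouse"}

# Each tuple is one data hand-off: (source, destination).
flows = [
    ("client", "broker"),
    ("broker", "underwriting"),
    ("underwriting", "data_warehouse"),
    ("data_warehouse", "claims"),
    ("claims", "outsourced_claims_handler"),
]

def external_touchpoints(flows, internal):
    """Return every flow that crosses the organisational boundary.

    These are the hand-offs where data enters or leaves the insurer's
    control, and so where impact assessments matter most.
    """
    return [
        (src, dst)
        for src, dst in flows
        if (src in internal) != (dst in internal)
    ]
```

Even a toy map like this makes visible which hand-offs sit entirely outside the organisation's control and which cross its boundary – the visualisation benefit Winlo points to.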
Here, Broome says, insurers could learn to capitalise on a more digitally savvy generation of end users, who are increasingly willing to share their own information.
“We can take a bit of learning from the travel industry,” says Broome, pointing out that operators there have shifted clerical responsibilities such as data entry onto the customer.
“If you go onto easyJet on your phone, by the end they will have floated a car hire option past you. At the end you make a single payment, and the technology ensures easyJet gets paid and the hire car company gets paid. Some of those parallels are going to start changing our industry.”