A roadmap for analytics: Fraud
Fraudsters are employing increasingly sophisticated methods to hit your profits, including using voice ‘deep fakes’, mimicking email structures and deploying malware. Insider Engage investigates how analytics can help your company fight back
Analytics and fraud – Key takeaways:
• The shift to remote working has introduced a new set of vulnerabilities for fraudsters to exploit
• 91% of fraud prevention professionals believe fraud levels will be higher this year than in 2019; 93% think they will be even higher next year
• The Association of British Insurers says carriers uncovered 107,000 fraudulent insurance claims worth £1.2bn in 2019
• Access to better data and analytics can help claims handlers detect patterns and correlations which indicate fraud
• Cyber-related fraud was up 34% for hacking incidents and 22% for computer viruses and malware in the period from June 2019 to June 2020, according to the National Fraud Intelligence Bureau
• The recent spike in fraud events is being driven by an uptick in digital accounts and payments and the establishment of well-organised, well-funded fraud rings
• Social engineering tactics have become extremely advanced, with fraudsters able to mimic emails of senior executives or vendors, and even create voice ‘deep fakes’ of company CEOs
• The key to mitigating risks from email phishing includes educating employees on how to spot attempts, adding external email warnings, and applying a robust account validation process
Data and analytics are a key aspect in the fight against fraud, helping insurers spot suspicious patterns and correlations that may not be apparent to the naked eye. But the digital environment also offers fraudsters a playground in which to extort and swindle.
Economic downturns are typically associated with a rise in fraud and financial crime. As individuals' and businesses' circumstances become more desperate during a recession, they may be more inclined to commit opportunistic fraud, such as exaggerating the extent of a loss.
Tough times are also opportunities for organised criminal gangs to capitalise on uncertainties and weaker defences to deceive and manipulate.
The global pandemic adds another dimension, with scammers preying on people's natural anxieties. The shift to remote working and greater digital dependence has introduced a new set of vulnerabilities ripe for exploitation.
According to research by UK fraud agency Cifas, 91% of fraud prevention professionals believe levels of fraud in 2020 will be much higher than in 2019 (when numbers reached an all-time high). And almost all (93%) said fraud levels would be even higher in 2021.
The insurance industry is particularly vulnerable, with insurance fraud often considered a 'victimless crime'. Significant efforts have been made - particularly within personal lines - to crack down on fraudulent claims and the culture that enables them. In 2019, insurers uncovered 107,000 fraudulent insurance claims worth £1.2bn, according to the Association of British Insurers (ABI).
From claims triage to deep learning
Access to better data and analytics can help claims handlers detect patterns and correlations that indicate a claim may be dishonest and require further investigation.
"Thanks to the proliferation of data points available to insurers and claims experts through sources like the Internet of Things, social media and data on companies and individuals, AI search engines can compile and present a claims handler with a full dossier of evidence that will either confirm the legitimacy of the claims or raise enough doubt that the claims handler can instigate a more detailed investigation," explains Richard Lawson, Pro Global's head of claims.
The vast majority of genuine claims should be settled quickly, leaving claims handlers to focus their attention on those that are questionable.
"It is only natural that economic crises as severe as the one we are confronting provoke an upward trend in [fraud] situations," says Miguel Ángel Rodríguez Cobos, strategic innovation manager, Mapfre.
"Structured and unstructured data along with the use of machine learning, deep learning, text mining and sentiment analysis models can help us better detect these claims - even though this is not our main goal when developing these techniques, which is to better serve our customers."
Use of more advanced analytics can help detect false documents, or even pick up on suspicious patterns in the tone of a claimant's voice. As AI and other new technologies enter the mainstream, insurers can hone their detection abilities.
"There are issues with document fraud or people using images that are scraped off the internet to substantiate claims," says Richard Sheridan, director, head of UK development sectors, Sedgwick International UK.
"Good solutions already exist for that, such as automated scanning of documents to determine whether they are genuine.
"We've always sought to use data and information when trying to tackle fraud. Historically that's tended to be around the features of the individual claim to see if there are factors that would tend to indicate a high degree of fraud risk. We've also looked at using data around if customers have a previous track record in making claims.
"We use a number of factors to assess fraud risk, and they've been effective for us, but where we are going now is to use data and machine learning to tell us which of those factors have the best predictive qualities so we can start applying different weightings. This should help not only increase our chances of detecting fraud but reduce the number of false positives."
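The weighting approach Sheridan describes can be sketched with a simple supervised model: given historical claims labelled fraudulent or genuine, a logistic regression learns how strongly each indicator predicts fraud, and a score threshold then separates claims for fast settlement from those referred for investigation. The factor names and training data below are hypothetical illustrations, not Sedgwick's actual factors or figures.

```python
import math

# Illustrative fraud indicators per claim (1 = present, 0 = absent).
# These factor names are hypothetical examples, not an insurer's real list.
FACTORS = ["late_notification", "prior_claims", "inconsistent_docs", "high_value"]

# Tiny synthetic training set: (feature vector, 1 = fraudulent, 0 = genuine).
TRAIN = [
    ([1, 1, 1, 1], 1), ([1, 0, 1, 1], 1), ([0, 1, 1, 0], 1), ([1, 1, 0, 1], 1),
    ([0, 0, 0, 0], 0), ([1, 0, 0, 0], 0), ([0, 1, 0, 0], 0), ([0, 0, 0, 1], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit logistic-regression weights by stochastic gradient descent."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def fraud_score(w, b, x):
    """Probability-style score in [0, 1]; higher means refer for investigation."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

weights, bias = train(TRAIN)
for name, wt in zip(FACTORS, weights):
    print(f"{name}: weight {wt:+.2f}")

# A claim with several strong indicators scores high and is referred;
# a clean claim scores low and can be fast-tracked for settlement.
print(f"suspicious claim: {fraud_score(weights, bias, [1, 1, 1, 0]):.2f}")
print(f"clean claim:      {fraud_score(weights, bias, [0, 0, 0, 0]):.2f}")
```

Because the weights are learned from outcomes rather than fixed by hand, indicators with little predictive value shrink towards zero, which is what reduces the false-positive rate Sheridan mentions.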
Losing the battle, winning the war
The latest UK figures show a 4% year-on-year increase in total fraud offences referred to the National Fraud Intelligence Bureau (NFIB) for the year ending June 2020. But there was a much bigger jump in cyber-related fraud, with a 34% increase in hacking incidents and 22% increase in computer viruses and malware in the same period.
This is because the same tools that are helping insurers to stamp out fraud are also being exploited by the criminals themselves, explains Melissa Townsley-Solis, co-founder and CEO of fraud detection firm GIACT.
"With so much personally identifiable information (PII) available online and through the Dark Web, it is easier than ever to create complex identities using a real person's PII, a completely fabricated identity, or an identity that combines real and fictitious PII,” she says.
“These identities are often leveraged to take over an account or create a new account with the goal of extracting funds, goods and/or services.”
Two factors have driven the recent spike in fraud events, according to Townsley-Solis: an accelerated movement towards digital accounts and payments (which was in part fuelled by the Covid-19 pandemic) and the establishment of well-organised, well-funded fraud rings.
"The convergence of these two trends, as well as the continued reliance on passwords to protect sensitive financial information, has allowed financial crime to flourish,” she continues.
“The pandemic has also allowed fraud operators to take advantage of companies as they rely more on digital platforms, especially businesses that may not have been as prepared as they needed to be to handle a new way of conducting business digitally."
Social engineering and business email compromise are among the ways in which malicious actors seek to defraud businesses. By using social engineering, they are able to convince their victims to carry out financial transactions, diverting money straight into the criminals' bank accounts.
"Social engineering and business email compromise should be a big concern," says Townsley-Solis. "For fraudsters, it has become big business, with well-organised fraud operations popping up across the globe. In fact, the FBI reports that business email compromise is responsible for nearly $2bn in losses a year."
Social engineering tactics have also become extremely advanced. Not only are fraudsters able to map out the organisational structure of a business in order to mimic the email of an executive with transactional authority, or the email of a vendor with a regular payment relationship, but they also go as far as to use voice ‘deep fakes’ of a CEO, for example, to socially engineer employees to misdirect funds.
"The key to mitigating the risk of business email compromise includes educating employees on how to spot attempts, adding external email warnings, and, most importantly, applying a robust account validation process," she says.
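Two of those mitigations, external email warnings and account validation, can be approximated in a few lines of code. The sketch below is purely illustrative: the company domain `example.com` and the `APPROVED_ACCOUNTS` registry are assumptions, standing in for whatever directory and out-of-band validation process a real finance team would use.

```python
COMPANY_DOMAIN = "example.com"  # hypothetical internal domain

# Accounts validated out-of-band (e.g. by phoning the vendor on a number
# already on file) before any funds are released -- an illustrative registry.
APPROVED_ACCOUNTS = {
    "acme-supplies": "GB29NWBK60161331926819",
}

def tag_external(sender: str, subject: str) -> str:
    """Prepend a warning banner when the sender is outside the company,
    so a spoofed 'CEO' address is visibly flagged to employees."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != COMPANY_DOMAIN:
        return f"[EXTERNAL] {subject}"
    return subject

def validate_payment(vendor: str, account: str) -> bool:
    """Release funds only to an account already validated for this vendor."""
    return APPROVED_ACCOUNTS.get(vendor) == account

# A look-alike domain ('examp1e' with a digit) triggers the banner.
print(tag_external("ceo@examp1e.com", "Urgent wire transfer"))

# A changed account number in an emailed invoice fails validation,
# forcing manual confirmation before payment.
print(validate_payment("acme-supplies", "GB29NWBK60161331926819"))
print(validate_payment("acme-supplies", "GB12FAKE00000000000000"))
```

The point of the registry check is that an emailed instruction alone can never redirect a payment: any new or changed account must first pass the independent validation step, which defeats the most common business email compromise play.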