Vol: 45 Issue: 3 | September 2022
Doctored photos and documents submitted by customers making insurance claims are a growing fraud threat for the insurance sector.
Yet research suggests that only 20 per cent of insurers are taking steps to counter deepfakes, or synthetic media: the manipulation or modification of photos, videos and other data. In its 2022 report Deepfakes: A Threat to Insurance?, American tech company Attestiv notes that altered photos that falsely inflate claims are the main concern for insurers.
Attestiv CEO Nicos Vekiarides says deepfakes use artificial intelligence (AI) to create synthetically generated photos and videos that do not have the forensic traces found in typical edited media. ‘They look like photos that have come right out of your phone,’ he says, ‘so it’s very difficult to spot the anomalies.’
The insurtech uses AI and blockchain technology to provide a tamper-proof media validation and automation platform.
With fraudsters getting more sophisticated, Vekiarides says there is no room for complacency, as the insurance sector increasingly turns to automation and relies on customer-supplied photos or videos during the claims process.
Fighting back
In New Zealand, ISACORP managing director John Borland heads an investigative company that is using metadata extraction software to help claims specialists identify altered photos and documents.
In one case, he used GPS tracking technology to expose the lies of a driver who crashed his car after doing burnouts. The fraudster submitted a legitimate video, but it was of another accident that occurred hundreds of kilometres away. ‘I was able to pinpoint the location where the video was taken and shoot down the claim,’ says Borland.
Currently, most cases in New Zealand involve genuine images — not AI-generated deepfakes — that are being used for unrelated claims. Borland says simple checks of ‘the DNA of an image’ can reveal the original date an image was taken and whether, and when, it has been doctored.
‘We catch out a lot of people who modify images and documents, especially when it comes to proof of purchase, receipts and so forth.’
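Borland’s own tools are proprietary, but the kind of ‘DNA of an image’ check he describes can be illustrated with a short, hedged sketch. The snippet below assumes the Pillow library and a hypothetical file name; it simply pulls the EXIF fields an investigator would look at first.

```python
# A minimal sketch of a basic metadata check, not ISACORP's tooling.
# Assumes the Pillow library; 'claim_photo.jpg' is a hypothetical file name.
from PIL import Image, ExifTags

def inspect_photo(path: str) -> dict:
    exif = Image.open(path).getexif()
    base = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}                    # IFD0 tags
    detail = {ExifTags.TAGS.get(k, k): v for k, v in exif.get_ifd(0x8769).items()}  # Exif sub-IFD
    return {
        "original_capture": detail.get("DateTimeOriginal"),  # when the camera says the shot was taken
        "last_saved": base.get("DateTime"),                   # typically updated when the file is re-saved
        "software": base.get("Software"),                     # editing software often leaves its name here
        "camera": (base.get("Make"), base.get("Model")),
    }

if __name__ == "__main__":
    print(inspect_photo("claim_photo.jpg"))
```

A capture date that post-dates the claimed incident, a ‘last saved’ stamp long after it, or a Software tag naming an image editor are all prompts for closer review, although stripped or missing metadata proves nothing on its own.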
While such manual investigations are valuable, Borland admits the future for insurers rests with AI-driven solutions that can process mass claims in real time. ‘If it was my company, I’d be pushing for paid subscriptions where you can just dump all the images on a technology platform and get them quickly assessed,’ he says.
It’s a message that resonates with Michael Lewis, CEO of Claim Technology in the United Kingdom. His business provides no-code tools to help insurers automate claims in the cloud, while also overseeing an ecosystem of more than 60 insurtechs that offer complementary digital transformation and risk-management solutions.
‘We act as the glue helping companies access the solutions more easily,’ he says.
Lewis says self-service and the automation of claims using images supplied by the insured can speed up settlement, cut operational costs and lead to a better customer experience. However, they also heighten fraud risk for the insurer. He cites the example of a claimant submitting a genuine photo of a broken television.
‘The problem is that the TV may belong to a friend down the road. So, if you’re introducing digital automation, it’s absolutely vital to have sufficient counter-fraud checks in place.’
Technology in play
One of the innovators coming to the aid of insurers is UK insurtech BlockFrauds.
Through its platform, insurance companies can subscribe to a private blockchain system, where images, videos and documents are assessed in a trusted environment, with that intelligence being shared among insurers.
Claims handlers get claim credibility scores and claimant reputational scores to help prioritise their investigations into potential cases of fraud.
BlockFrauds co-founder and chief technology officer Soadad Farhan says that, in addition to doctored images, his business specialises in analysing voice data of insurance customers, with algorithms checking what people say and how they say it.
Drawing on the context of the audio and the intonation of the speaker, the technology then provides a ‘sentiment analysis’ that can alert claims handlers to possible risks of fraud.
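BlockFrauds’ own voice models are not public, but the text half of such a check can be sketched with an off-the-shelf sentiment scorer. The snippet below assumes a call has already been transcribed and runs NLTK’s VADER analyser over an invented transcript; tone-of-voice analysis would need audio features beyond this.

```python
# A rough sketch of text-level sentiment scoring on a hypothetical call transcript,
# using NLTK's VADER analyser. It illustrates the general idea only and is not
# BlockFrauds' method; intonation is ignored entirely.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

transcript = "I already sent the receipt twice. I don't understand why this is taking so long."
scores = SentimentIntensityAnalyzer().polarity_scores(transcript)
print(scores)  # neg/neu/pos proportions plus a 'compound' score between -1 and 1
```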
Farhan says the combined force of multiple tech innovators is the real strength of the anti-fraud effort. ‘When you combine the techs together, then all of a sudden you see multiple use cases, and all the emerging techs are coming together to provide this extra and really unique value.’
With Attestiv’s technology, the first line of defence is prevention. The platform forensically scans each item to detect anomalies and stores a fingerprint on a blockchain, where it cannot be changed, enabling future validation.
Vekiarides says this gives insurers the ability to identify deepfakes at the point of capture. ‘Certainly, in terms of stopping the deepfakes, this blockchain tokenisation is the most powerful tool we have,’ he says.
The second line of defence is detection: using AI to identify any issues with an image.
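Attestiv’s platform itself is not open source, but the fingerprint-and-verify idea behind that first line of defence can be sketched in a few lines. In the snippet below a SHA-256 digest stands in for the fingerprint and an in-memory dictionary stands in for the immutable ledger; both are illustrative assumptions, not the vendor’s implementation.

```python
# A minimal sketch of fingerprint-and-verify, not Attestiv's implementation.
# A plain dict stands in for the blockchain ledger purely for illustration.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

ledger: dict[str, str] = {}  # stand-in for an append-only, tamper-proof ledger

def register(claim_id: str, path: str) -> None:
    ledger[claim_id] = fingerprint(path)              # recorded at the point of capture

def validate(claim_id: str, path: str) -> bool:
    return ledger.get(claim_id) == fingerprint(path)  # any later edit changes the digest
```

Because even a one-pixel edit produces a different digest, a mismatch at claim time shows the file is no longer the one that was captured, which is what makes the approach useful for catching tampering after the fact.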
The triage approach
Joe Lemonnier is head of product marketing at Resistant AI, an AI-powered anti-fraud platform headquartered in Prague. He believes the difficulty and time required to generate convincing deepfake images means that fraudsters are more likely to focus on ‘shallow-fake’ digital forgery techniques.
This involves downloading an image from the web of, for example, a damaged car and modifying the licence plate number, or taking an invoice for a low-value item, inflating its value and changing the buyer’s name and address.
‘These modifications are so easy to make — and invisible to the human eye — that they can use the original documents as templates and farm out hundreds or thousands of variants,’ says Lemonnier.
The goal of the con artists is to overwhelm claims teams across multiple insurers. ‘While some may be caught, many more will get through,’ notes Lemonnier, ‘and even a 50 per cent success rate will be wildly profitable given the effort invested.’
Resistant AI’s technology specialises in preventing that kind of scaled, automated attack from fraud rings by forensically analysing each image or document in more than 500 ways and comparing it against all others it has seen to link them together.
It also scrutinises the behaviours of claimants on submission — all to provide a clear, actionable verdict in seconds.
‘That level of forensic analysis is really beyond the scope of claims teams, who usually have minutes to review a case and make a call,’ says Lemonnier. ‘An automated solution is needed to help them triage submissions.’
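Resistant AI’s 500-plus forensic checks are proprietary, but one plausible ingredient of linking submissions together, spotting near-duplicate images such as a reused photo with a tweaked licence plate, can be sketched with a perceptual hash. The snippet below assumes the third-party Pillow and ImageHash libraries.

```python
# A rough sketch of near-duplicate detection via perceptual hashing: one small,
# assumed ingredient of cross-claim comparison, not the vendor's actual method.
# Requires the third-party Pillow and ImageHash packages.
from PIL import Image
import imagehash

def perceptual_hash(path: str) -> imagehash.ImageHash:
    return imagehash.phash(Image.open(path))

def looks_reused(new_path: str, seen: list[imagehash.ImageHash], threshold: int = 8) -> bool:
    """Flag an incoming image whose hash sits within a few bits of one already on file."""
    new_hash = perceptual_hash(new_path)
    return any(new_hash - old <= threshold for old in seen)  # '-' gives the Hamming distance
```

Unlike a cryptographic digest, a perceptual hash changes only slightly when an image is cropped, recompressed or lightly edited, which is what lets it link variants of the same template across claims.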
Watch this space
While the headlines often focus on the importance of stopping fraud, Farhan says the motivation among insurers for using technology that verifies photos and documents is often much simpler — they want to avoid criticism for taking too long to process insurance claims, and protect their brand.
‘What they want to do is have the ability to push genuine claims forward very quickly, because this affects customer sentiment,’ he says.
With analysts suggesting that about 60 per cent of claims will be touchless by 2025, Vekiarides has no doubt that Attestiv’s apps, APIs and distributed ledgers will continue to be busy detecting fraud.
‘My message is that this is something that insurers have to start looking at, if they’re not already doing so, because it can be a tremendously bad source of fraud if they don’t take any action.’
Four steps to safety
Resistant AI’s Joe Lemonnier outlines how insurers can use smart technology to tackle deepfake fraud; a minimal sketch of such a triage flow follows the list:
- Forensically analyse all incoming images and documents, checking internal structures, metadata, pixel layouts, compression levels and other technical details instantly.
- Automatically block obvious forgeries, and compare images and documents to detect reuse and template farming.
- Analyse suspicious behaviours in the submission process around the actual documents to stop bots, dummy accounts and account takeovers.
- Let legitimate documents through to speed up payouts and improve customer experience and satisfaction.
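As a hedged illustration only, the sketch below strings the four steps into a single triage decision. The check functions are hypothetical placeholders rather than Resistant AI’s API.

```python
# A toy triage sketch tying the four steps together; the check functions are
# hypothetical placeholders, not Resistant AI's API.
from typing import Callable

def triage(
    submission: dict,
    is_forged: Callable[[dict], bool],                # steps 1-2: forensic checks on the file itself
    is_reused: Callable[[dict], bool],                # step 2: reuse / template-farming comparison
    is_suspicious_behaviour: Callable[[dict], bool],  # step 3: bots, dummy accounts, takeovers
) -> str:
    """Return 'block', 'review' or 'pass' for an incoming claim submission."""
    if is_forged(submission):
        return "block"    # obvious forgeries never reach a handler
    if is_reused(submission) or is_suspicious_behaviour(submission):
        return "review"   # route to a human for closer inspection
    return "pass"         # step 4: let legitimate documents straight through
```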