
Vol 48 Issue 1
In short
- Issues with AI use in claims management can arise from biased outcomes and the inability of the tech to understand more complex scenarios.
- At the same time, virtual assistants can be effective in helping with claims intake, and sentiment analysis can identify vulnerable customers and urgent calls that require human intervention.
- For the foreseeable future, AI is a very useful tool for claims teams, but it won’t be replacing claims professionals anytime soon.
Managing claims gives insurers the opportunity to deliver on their promises and allows customers to receive the support and compensation they deserve, when they need it the most. Getting it wrong can be a disaster for both parties, so what happens when AI enters the equation?
According to research from Bain & Company, AI tools could reduce loss-adjusting expenses by 20–25 per cent and leakage by 30–50 per cent, creating more than US$100 billion in benefits for insurers and customers.
AI adoption has been rapid and highly effective in some situations, but in others there is still a gap between achievements and expectations.
“We are at the beginning of an exciting AI journey in claims management, but there is still a way to go before we truly realise the full potential,” says Johan Nelis, senior director of International Solution Consulting at Duck Creek Technologies, a global provider of software solutions for the property and casualty insurance sector.
The challenges primarily stem from two main areas:
- AI’s reliance on large volumes of clean, unbiased data.
- Its current inability to fully understand complex, nuanced situations or accurately interpret unstructured or ambiguous information.
The data challenge
“AI training requires large amounts of historical data, both structured and unstructured,” says Ashish Savani, vice president of product development at Gallagher Bassett.
The types of data used to train AI are generally past claim details, outcomes and variables such as claim types, customer information, policies, training guides and frequently asked questions. The more data used, the better the predictions.
“For example,” says Savani, “if we are building a model that will predict the ideal reserve set-up for workers compensation claims, this would require several years of historical workers compensation claims information, including how the reserves trended and what were the average payouts for these claims.
“The training data must be representative and diverse, encompassing a wide range of scenarios and customer profiles. A larger sample helps to avoid any inherent biases that might skew the AI’s decision-making process.”
That’s not to say that you need a huge dataset to implement an AI-powered solution. Sompo Insurance Singapore augmented a smaller dataset of travel and personal injury claims with other data sources to train models for a fraud detection tool.
The tool assigns a fraud score to every claim. Low-probability cases (approximately 10–20 per cent of claims) can be settled within minutes. High-probability cases are assigned to a special investigation unit for a closer look.
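As a rough illustration of how score-based routing of this kind might work, the sketch below applies a simple triage rule to a claim's fraud score. It is a minimal Python sketch under assumed names and cut-off values; the thresholds are hypothetical and this is not Sompo's actual implementation.

```python
# Illustrative sketch only: a score-based claims triage rule similar in
# spirit to the approach described above. Function names and thresholds
# are assumptions, not any insurer's production values.
from dataclasses import dataclass


@dataclass
class TriageDecision:
    fraud_score: float  # model's estimated probability of fraud (0-1)
    route: str          # "auto_settle", "standard_review" or "special_investigation"


def triage_claim(fraud_score: float,
                 auto_settle_below: float = 0.05,
                 investigate_above: float = 0.80) -> TriageDecision:
    """Route a claim based on its fraud score.

    Low-score claims are settled straight through within minutes,
    high-score claims go to a special investigation unit, and everything
    else follows the normal adjuster workflow. The thresholds here are
    assumed values that would be calibrated against historical claims data.
    """
    if fraud_score < auto_settle_below:
        route = "auto_settle"
    elif fraud_score > investigate_above:
        route = "special_investigation"
    else:
        route = "standard_review"
    return TriageDecision(fraud_score=fraud_score, route=route)


# Example: a claim the model scores as very unlikely to be fraudulent
print(triage_claim(0.02))  # TriageDecision(fraud_score=0.02, route='auto_settle')
```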
As more claims data becomes available, it is used to update and retrain the models, improving the accuracy of the tool.
If the data used to train AI models is biased or incomplete, it could result in unfair outcomes, particularly in claims involving new or rare events that don’t fit existing patterns, says Nelis.
For instance, a class action lawsuit is currently underway in the United States alleging that UnitedHealth Group (UHG), UnitedHealthcare and naviHealth used AI to assess claims, even though the insurer had stated that clinical staff would make coverage decisions. On appeal, 90 per cent of the claims denied by the AI were overturned.
“Given the nature of insurance, especially with its regulatory requirements, it’s crucial to start with clean, accurate data and validate it on a continual basis,” says Nelis.
The complexity challenge
The process of lodging a claim and collecting information is a good example of the potential of AI tools, says Bharti Paryani, director of Solution Consulting/Engineering at Duck Creek Technologies.
“Starting with the intake, dealing with an accident or home damage can be overwhelming. Virtual assistants simplify the process, making it easier for customers to reach out to their insurer,” says Paryani.
Nelis agrees. “AI can be useful in automating simple tasks like claim intake, triage and damage assessment,” he says, “but it’s important to maintain human oversight, especially for more complex claims or situations where the customer may be distressed.”
Paryani says that AI can be used for speech and sentiment analysis to detect frustration or distress in a customer’s voice, triggering an escalation to a human for urgent cases.
This is one of the functions of AI that Zurich is already using in claims management, says Matt Paterson, chief claims officer, Zurich Australia and New Zealand.
“Behind every claim is a person or family experiencing challenging times. As we invest in AI to improve claims efficiencies, it is critical that the claims process remains people-centred,” he says.
“We are investing in several solutions, including call monitoring and transcription capabilities, which use AI to take notes, identify sentiment and detect potentially vulnerable customers who may require additional support.”
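As an illustration of the kind of escalation rule Paryani describes, the sketch below routes a call to a human when a sentiment score or the call transcript signals distress. The scoring scale, threshold and keyword list are assumptions made for the example, not any insurer's actual configuration.

```python
# Illustrative sketch only: escalating a call to a human claims handler when
# sentiment analysis flags distress. The sentiment score would come from a
# speech-analytics service; the threshold and keywords are assumptions.
DISTRESS_KEYWORDS = {"urgent", "can't cope", "emergency", "vulnerable"}


def should_escalate(sentiment_score: float, transcript: str,
                    distress_threshold: float = -0.6) -> bool:
    """Return True if the call should be routed to a human claims handler.

    sentiment_score is assumed to range from -1 (very negative) to +1
    (very positive). Strongly negative sentiment, or explicit distress
    language in the transcript, triggers an escalation.
    """
    text = transcript.lower()
    if sentiment_score <= distress_threshold:
        return True
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)


# Example: an upset caller is handed to a person immediately
print(should_escalate(-0.8, "I really need help, this is urgent"))  # True
```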
Fine-tuning the human factor
Gallagher Bassett uses a variety of AI tools in claims management, including machine learning and decision support tools that help claims professionals make informed decisions, says Savani.
The company is also piloting sentiment analysis tools in the Australian market. “These tools evaluate the emotional tone within client communications by identifying trends in customer conversations,” explains Savani. “[They] detect sentiment, intent and other key metrics, and support actionable insights that allow us to make data-driven decisions.”
What has become apparent is that the tools can sometimes fall short and produce biased outcomes if they have not been properly trained and supervised, says Savani. As a result, insurers need to invest a significant amount of time in training and calibrating their AI.
“AI as it is available and used is very good in doing what it is being asked to do,” he says. “However, it currently lacks the power of thinking. It cannot fathom all possible information around a claim and make decisions as a human would do, which would require it to fully understand the nuances and complexities of individual claims.
“Human oversight corrects potential AI errors and ensures fairness, reducing biases. Any recommendations made by AI should be transparent and trackable, with clear explanations accessible to adjusters and customers.”
While most tech commentators, including OpenAI CEO Sam Altman, believe that AI is quickly going to get better with scale and more sophisticated in its outputs, few currently feel it will be capable of the higher-level critical and creative thinking that characterises humans.
Right now, a well-trained claims team should be aware of AI’s strengths and weaknesses and understand how AI can — and can’t — augment the claims process. “It’s critical to build a team and culture that are aware of the potential for bias,” says Savani.
“In the insurance industry, where the stakes of a claim can be high, this is particularly important.”
Case study: Aviva rewires the insurance claims journey with AI
Problem
The United Kingdom’s largest general insurance company, Aviva, wanted to transform the entire claims management process, from the first customer notification to the final settlement of the claim.
Solution
Working with McKinsey’s AI arm QuantumBlack, Aviva deployed 80 AI models across the claims process, bringing its employees along on the journey and investing 40,000 hours in training to build new skills. Complex claims, such as those involving personal injury, were prioritised to receive a human touch.
Result
The average time to assess liability was reduced by 23 days, routing accuracy improved by 30 per cent and customer complaints dropped by 65 per cent. Customer satisfaction soared and employee engagement reached an all-time high, while greater accuracy in assessing damage and a better selection of repair shops saw Aviva triple the use of recycled parts — resulting in lower costs and lower environmental impact.
Writer’s insight
“Using AI in claims management makes sense when you want a quick response to a simple claim. But it’s good to know insurers understand that when it gets more complicated, people want — and expect — an actual human to help them through a difficult situation.”