Artificial intelligence (AI) will continue to shake things up in the Australian insurance industry because of its ability to boost efficiencies, reduce costs and improve customer service.
Omid Karr, executive manager of advanced analytics at IAG, lists two broad camps of current AI applications in the insurance industry.
The first is the generic everyday use of AI that helps us write emails, summarise text, write code and so on.
Steven Tuften, director of AI and emerging technology at Steadfast, says a great example of this is using AI to write the first draft of copy rather than sitting with a blank sheet of paper thinking about how to begin.
“It’s much easier to tear apart something wrong than to start with nothing,” he says. “But because of its probabilistic nature, you must fact-check it and have a human in the loop.”
Karr says the second camp of current AI applications is more specialised and tailored. In insurance, it’s used in areas such as pricing, underwriting, claims, enhancing the supply chain and assessing risk.
As an example, he cites what’s being done after a customer takes an image or video of the damage to their vehicle for an accident claim.
AI will use it to assess what parts must be replaced or whether the car is a write-off. AI will come up with cost estimates and advise the customer that the claim has been lodged.
AI will also identify the best suppliers for repairs and can pick up real-time signals of fraud – for example, if the image submitted has been used somewhere before or photoshopped.
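The reused-image fraud signal described above is often built on perceptual hashing: near-identical photos produce near-identical hashes even after small edits. The sketch below is a hypothetical illustration, not IAG’s system; it assumes claim photos have already been decoded and downscaled to 8×8 greyscale grids, which real pipelines would do first.

```python
# Minimal sketch of a duplicate-image fraud signal using an average hash.
# Assumes photos arrive as 8x8 greyscale pixel grids (values 0-255);
# the photos and threshold below are hypothetical examples.

def average_hash(pixels):
    """Hash an 8x8 greyscale grid: one bit per pixel above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Count differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_reused(new_photo, seen_hashes, threshold=5):
    """Flag the photo if it is near-identical to any previously seen one."""
    h = average_hash(new_photo)
    return any(hamming(h, prior) <= threshold for prior in seen_hashes)

# Two near-identical photos (re-upload with slight brightening) and one distinct.
photo_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
photo_b = [[min(255, v + 3) for v in row] for row in photo_a]  # same photo, re-submitted
photo_c = [[255 - v for v in row] for row in photo_a]          # genuinely different

seen = {average_hash(photo_a)}
print(looks_reused(photo_b, seen))  # True  -> flagged as a possible reuse
print(looks_reused(photo_c, seen))  # False -> passes this check
```

Small brightness or compression changes leave most bits of the hash intact, which is why the comparison uses a Hamming-distance threshold rather than exact equality.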
While this cuts down on the need for human resources, Karr says a human eye is always required to check that everything is 100 per cent accurate.
He says AI can also play a big role in underwriting, which requires the review of many documents. This is particularly so in commercial insurance, which is becoming very specific, specialised and complicated because there are many different elements to evaluate.
Karr says manually reviewing each element usually takes a lot of time, but the power of AI massively streamlines the process. Multimodal AI can be used here to process documents, images, videos and audio recordings and produce a summary along with potential indicators of risk.
“Sometimes, because underwriters are super cautious, they may over-specify a risk and AI can create a better balance in terms of how the risk needs to be addressed and underwritten,” he says.
Moving ahead
Tuften says the big thing now is generative AI. Companies are getting excited about it because suddenly we can have a conversation with an AI model. However, he adds: “The problem is it’s not always factually correct, but it’s great at emulating how you converse in a natural language.”
Indeed, while traditional AI focuses on analysing historical data and making future numeric predictions, generative AI allows computers to produce brand-new outputs such as text, images, music, audio and video that are often indistinguishable from human-generated content.
“With the advent of generative AI, and the opportunities afforded by conversational AI technologies, we are seeing AI more actively used in more insurance business functions,” says Tuften.
One of the more innovative and practical solutions Tuften has seen is the result of blending AI with IoT sensors.
“By using data collected from wearable devices and mobile apps, people can monitor and manage their pain levels when recovering from an accident or even prevent workplace injury in high-risk workplaces,” he says.
“These solutions can mitigate the risk of further injury or help workers return safely to work faster resulting ultimately in reduced insurance claims, healthier individuals and lower premiums.”
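The wearable-plus-AI monitoring Tuften describes can be pictured as a simple rolling check over sensor readings. This is a hypothetical sketch, not any vendor’s product: the 0–10 strain scale, window size and threshold are all invented for illustration.

```python
# Hypothetical sketch of a wearable-data alert for injury recovery:
# flag when the rolling average of recent strain readings (0-10 scale)
# drifts above a safe threshold. Scale, window and threshold are assumptions.

from statistics import mean

def strain_alert(readings, window=3, threshold=7.0):
    """Return True if the average of the last `window` readings exceeds the threshold."""
    if len(readings) < window:
        return False  # not enough data to judge a trend yet
    return mean(readings[-window:]) > threshold

session = [4.0, 5.5, 6.0, 8.0, 8.5, 9.0]
print(strain_alert(session))  # True -> recent average of 8.5 is above the safe threshold
```

A production system would obviously need clinically validated thresholds and human review before any intervention, but the pattern (stream in, windowed check, alert out) is the same.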
Towards more innovation
Karr predicts that “AI agents” will be big in the future. “Currently, AI is mostly in augmentation. A human will use AI to automate something or grab more insights. But now we will see more of what we call agentic workflows, meaning multiple AI agents, each specialised in a certain task, will work together to automate a whole end-to-end process.”
Karr cites claims as an example of where this could happen. Several steps take place after a claim is lodged. It has to be captured, reviewed, summarised and assessed. Decisions must be made about what actions to take. But in the future, Karr believes agents will work together to complete the task.
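The end-to-end claims flow Karr describes could be sketched as a chain of specialised agents, each owning one step: capture, review, summarise, assess. In the minimal illustration below, plain functions stand in for the AI models each agent would wrap; the field names and pipeline are hypothetical.

```python
# Illustrative sketch of an agentic claims workflow: each "agent" is a
# specialised step in the chain. Real agents would wrap AI models;
# here simple functions stand in, and all field names are assumptions.

def capture_agent(raw):
    """Extract structured fields from the lodged claim."""
    return {"claim_id": raw["id"], "description": raw["text"].strip()}

def review_agent(claim):
    """Check the claim has the information later agents need."""
    claim["complete"] = bool(claim["description"])
    return claim

def summary_agent(claim):
    """Produce a short summary for downstream decision-making."""
    claim["summary"] = claim["description"][:60]
    return claim

def assessment_agent(claim):
    """Decide the next action; a real system would keep a human in the loop."""
    claim["action"] = "assess" if claim["complete"] else "request-more-info"
    return claim

PIPELINE = [capture_agent, review_agent, summary_agent, assessment_agent]

def run_claim(raw):
    """Pass the claim through every agent in order."""
    state = raw
    for agent in PIPELINE:
        state = agent(state)
    return state

result = run_claim({"id": "C-1001", "text": " Rear bumper damaged in car park. "})
print(result["action"])  # assess
```

The point of the pattern is that each agent is independently testable and replaceable, which is what distinguishes an agentic workflow from one monolithic model call.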
That said, Tuften notes that anyone trying to forecast what will happen in AI in five to 10 years has a high probability of getting it wrong. “Industry-changing innovations often come out of left field and are hard to spot,” he says.
But over the next three years, he expects AI virtual assistants to emerge that can learn from us and even coach us.
“These will train junior brokers and help experienced and overworked brokers drowning in mundane back-office activities. While we may not trust them with autonomous decision-making yet, automating the easy stuff will start to become commonplace,” he says.
Tuften adds that a move to represent both language patterns and knowledge will help drive additional use cases in insurance.
“Knowledge graphs are a complementary technology to AI that lets us represent complex real-world concepts and corporate knowledge in a structured and authoritative way,” he says.
“Currently, conversational AI fails by giving probabilistic rather than deterministic responses. Knowledge graphs help ground generative AI to answer questions in a repeatable, explainable and consistent manner. I believe this will help broaden the use cases for generative AI to ones where consistent and trusted responses are required.”
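One way to picture the grounding Tuften describes: answer questions from an authoritative store of facts first, and fall back to the generative model only when no fact matches. The sketch below uses a tiny in-memory triple store; the policy facts and the fallback function are hypothetical placeholders, not any real product.

```python
# Sketch of grounding conversational AI in a knowledge graph: deterministic
# facts come from the graph, and the generative model is only a fallback.
# The triples and the fallback are hypothetical placeholders.

TRIPLES = {
    ("policy-123", "excess"): "$500",
    ("policy-123", "flood_cover"): "included",
}

def llm_fallback(question):
    """Placeholder for a generative model call; probabilistic, so its
    answer should be labelled unverified and fact-checked by a human."""
    return f"(unverified model answer to: {question})"

def answer(entity, attribute):
    """Prefer a deterministic fact from the graph; fall back to the model."""
    fact = TRIPLES.get((entity, attribute))
    if fact is not None:
        return f"{attribute} for {entity} is {fact}"  # repeatable and explainable
    return llm_fallback(f"{attribute} of {entity}")

print(answer("policy-123", "excess"))            # grounded, same answer every time
print(answer("policy-123", "cancellation_fee"))  # no fact -> flagged model fallback
```

Production knowledge graphs use dedicated stores and query languages rather than a dictionary, but the control flow (graph first, model second, label the difference) is the essence of the grounding Tuften describes.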
However, Tuften notes: “Opportunities for innovation in AI often emerge from smaller, more nimble organisations with a higher appetite for risk. But without the large datasets available to many insurers, insurtech ideas cannot always realise their full potential. Partnerships between industry, academia and startups can be a way to break through this innovation barrier.”
Risky business
Karr points out that some generic risks come with any technology. “AI is only as useful as the data used,” he says, adding that information security, privacy, ethics and the responsible, reliable and accurate use of AI are set to be big issues around AI.
He says it’s also important for insurers to realise that AI creates a new attack vector for cyber security threats.
Tuften says leakage of commercially sensitive data is a topical issue, particularly with generative AI solutions that can use your prompt data to improve their own models.
“Your sensitive data forming part of the model’s knowledge base is a genuine risk if the AI vendor does not allow you to opt out of training the model with your prompts,” he says.
“Your data may end up in a chat response to a competitor’s query. As such, any use of generative AI models and AI conversational solutions requires scrutiny by your legal or contracts team.”
Tuften says the Australian Government’s interim response to the 2023 Safe and Responsible AI consultation called out the need for consultation on a new regulatory framework for high-risk AI solutions along with a lighter touch for lower-risk solutions made up of voluntary safety standards and watermarking or labelling of AI-generated content.
“We do not yet know what that high-risk AI regulatory framework will look like, but it seems probable it will follow the EU AI Act (Regulation (EU) 2024/1689) in defining high-risk applications that may impact insurance,” he says.
“As such, AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance will be considered high risk and therefore subject to more stringent oversight.
“AI systems intended to be used for emotion recognition, where this is used to detect customer or staff sentiment, are also considered high risk under the EU AI Act due to the lack of maturity in this technology. Even well-intentioned uses will likely be restricted due to the potential for harm due to implicit biases in the technology.”
The human element
Despite the amazing advances in AI in recent years, Tuften and Karr say AI still has a long way to go before it can replace humans in face-to-face interactions or conversations with life-changing impact.
Tuften says AI is not yet capable of the full range of human reasoning and intellectual ability, of demonstrating genuine empathy, of delivering safe, repeatable and trusted advice or recommendations, or of being held legally accountable for its decisions or actions.
“For all these reasons, and more, I believe humans in the loop are a critical element of any use of AI in insurance where there is a direct impact on the end customer,” he says.
Hear Tuften's insights at ANZIIF's General Insurance Breakfast