As we increasingly rely upon artificial intelligence (AI) systems to support human decision-making, insurance businesses are under growing pressure to make the outputs of those systems interpretable.
And while AI has tremendous benefits, it would be naïve to rely purely on the ‘black box’ as this could come at the cost of alienating customers.
One way insurance companies can ensure that the AI data and algorithms they use for critical decisions are not biased or otherwise untrustworthy is to take a ‘glass box’ approach to AI.
Glass boxing, also known as white boxing, is a system of AI modelling that enables insurance companies to understand how AI programs come up with their predictions.
Ultimately, this enables clear communication and a proper explanation for end-user customers about how their data is being used.
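To make the idea concrete, here is a minimal sketch of what a glass box model looks like in practice: a hand-weighted linear premium score where every contribution to the final price can be itemised for the customer. All feature names, weights, and figures below are illustrative assumptions, not real actuarial values or any particular insurer's method.

```python
# A "glass box" premium model: a simple linear score whose every
# contribution can be listed line by line for the policyholder.
# All weights and features here are hypothetical, for illustration only.

BASE_PREMIUM = 300.0  # assumed base annual premium

# weight = extra premium per unit of each risk factor (illustrative)
WEIGHTS = {
    "prior_claims": 75.0,   # per claim in the last five years
    "property_age": 2.0,    # per year of property age
    "flood_zone": 120.0,    # 1 if the property is in a flood zone, else 0
}

def quote(features: dict) -> tuple[float, list[str]]:
    """Return the premium plus a human-readable breakdown of it."""
    premium = BASE_PREMIUM
    breakdown = [f"base premium: {BASE_PREMIUM:.2f}"]
    for name, weight in WEIGHTS.items():
        value = features.get(name, 0)
        contribution = weight * value
        premium += contribution
        breakdown.append(f"{name} = {value}: +{contribution:.2f}")
    return premium, breakdown

premium, breakdown = quote(
    {"prior_claims": 1, "property_age": 20, "flood_zone": 0}
)
print(premium)            # 300 + 75 + 40 + 0 = 415.0
print("\n".join(breakdown))
```

Because every term in the score is visible, an insurer can hand the customer the exact breakdown behind their quote, which is precisely the kind of explanation a black box model cannot provide.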
Benefits and Challenges
The major benefits and challenges of AI have been clearly identified.
On one hand, insurance companies have invested in AI to help reduce premium pricing for policyholders, to address fraud, to incentivise the take-up of insurance, open insurance to new groups and reduce the damage to property (and people).
However, in his book Noise, Nobel laureate Daniel Kahneman reports that the discrepancy between premium prices offered by different underwriters was typically 55 per cent, with many variances even more extreme.
The book emphasises how people can reach very different, even contradictory, conclusions based on the very same facts.
There’s no doubt that AI is essential to helping the insurance industry move away from the cost pressures and biases inherent in predominantly manual underwriting and claims decision-making.
However, AI also comes with an obvious risk to both insurance companies and their consumers.
Research from Policygenius suggests that 83 per cent of consumers would feel uncomfortable if their home, auto, or renters’ insurance claim was reviewed exclusively by AI.
Almost 60 per cent reported that such discomfort would prompt them to avoid a start-to-finish AI claims process by switching insurance companies.
Given insurance is a business underpinned by trust, this suggests a significant potential dilemma for the insurance industry.
Meanwhile, AI continues to generate tremendous value in many industries worldwide.
While there is clearly a need for change towards a more AI automated system to enhance the experience of insurance customers, the main question is how far should it go?
Most AI applications offer a quick, simple decision-making solution that automates routine tasks.
But insurance data is often more complex and decisions have much longer lasting impacts than other industries.
This means that the more AI is applied in insurance, the more it will impact the daily lives of people who rely on their insurers to be there for them during the hardest of times.
That is why a transparent AI approach is crucial.
Don’t Trust the Black Box
As the use of AI continues to surge, so do consumer concerns about the loss of control an AI revolution might bring.
Consumers will not only demand transparent and responsible AI but will also seek reassurance that AI is being used to benefit not harm them.
Black box AI is typically a sticking point: the opacity of how it reaches its decisions is the main contributor to consumer mistrust.
Although the outputs of black box AI tend to be remarkably accurate, the complexity of its algorithms prevents insurers from interpreting or explaining those outputs to customers.
In turn, this leads to privacy concerns serious enough that consumers would forgo any financial savings they might have enjoyed from insurers predicting their willingness to pay and building their policies using algorithms fed by data processed from, for example, social media.
In addition, despite the increasing shift towards AI automated insurance, customers still appreciate the human touch when it comes to purchasing insurance coverage and making claims.
Re-building Trust: Glass Boxing the Black Box
It is crucial to note that without greater disclosure, insurers will struggle to remain trustworthy in an increasingly AI automated world.
So, how can insurers realise the positive potential of AI and ensure that it is used to enhance trust and consumer confidence, rather than diminish it?
With opacity at the heart of the black box problem, glass boxing is the answer for re-building trust with customers concerned about transparency and about how AI systems use their personal information.