Artificial intelligence has revolutionised industries from healthcare to financial services, and its adoption is growing faster than ever.
The global market for AI was valued at US$383.3 billion in 2021 and it’s expected to maintain significant year-on-year growth for the next five years.
But where do humans fit into the equation?
A world of AI
This is one of the questions that will be explored at the upcoming Rising Stars in Insurance Seminars in NSW and Victoria.
A panel discussion will focus on ‘The importance of the human factor in a world of AI’ and will explore questions such as how AI and humans can work together to improve outcomes for customers, employees and the broader insurance industry.
“Emotional intelligence is a human trait,” says Kate Fairley, Principal AR and Senior Risk Adviser at Macedon Ranges Insurance in regional Victoria, and a panellist at the Rising Stars seminar.
“AI can’t replace the ability of humans to empathise and deliver personal connections, and these are things that insurers need to focus on, regardless of technology.”
The best of humankind
For industries like insurance, AI presents benefits such as pricing and underwriting agility, more targeted, customer-focused products and a greater ability to predict and mitigate risk.
However, Fairley says technology cannot replicate genuine, personalised customer service.
“Businesses use technology like AI for different reasons,” she says. “At Macedon Ranges Insurance, we are very much a local business and we leverage technology to offer the best possible service to our clients when it comes to risk advice.
“A computer can only respond to data inputs, and often clients don’t know the relevance of things like tractors and home businesses. We’re the ones prompting them to consider all of their risks, not just what they’d enter into an online quoting system.”
Fairley also notes that subscribers to the General Insurance Code of Practice are required to take extra care of vulnerable customers, which is something humans can do best.
The latest Code also strengthens this support for people affected by family violence and includes new provisions about mental health.
“Identifying a vulnerable person is so nuanced it’s hard to imagine AI being able to achieve this,” says Fairley. “But in a practical sense our standard level of customer service is what a big business would use for dealing with a vulnerable client.
“We don’t use cookie-cutter template emails. All of our correspondence with clients is customised and tailored to the client’s specific needs, with the aim of helping them make an informed decision about their general insurance risks.”
Incorporating ethics
Associate Professor Theresa Dirndorfer Anderson, Data & Information Ethicist, says that excluding the human element in some areas of insurance can present risks.
“When working with vulnerable people, for instance, if you completely automate a call centre and let chatbots handle all the situations, it wouldn’t take long for someone to work out that they were either dealing with a very tone-deaf person, or they’re not dealing with a person at all,” she says.
“If that organisation wants to communicate a value of support and respect for customers, this would perhaps show the opposite.
“There are also potential adverse effects when data is processed without having any human engagement, because data categories are created to divide people in certain ways, but it’s not always accurate and it’s not always how people see themselves.”
The human factor is also vital in the implementation of AI. However, the 2021 Responsible AI Index, created in partnership by Fifth Quadrant, Ethical AI Advisory and Gradient Institute and sponsored by IAG and Telstra, shows that fewer than one in 10 Australian-based organisations believe they are mature in their approach to deploying responsible and ethical AI.
Taking responsibility
Tiberio Caetano, Chief Scientist at Gradient Institute, a nonprofit research institute that works to build ethics, accountability and transparency into AI systems, says there are two fundamental questions for understanding responsible AI.
“The first is understanding the true purpose for which you are using AI,” he says. “Is the purpose socially acceptable? Is it lawful? What is the true intention behind the deployment of an AI system?”
The second question, says Caetano, is how you can achieve that purpose.
“This is crucial, because how do you ensure that the AI technology you are building, developing and applying actually delivers on that purpose, without creating significant side effects and unintended consequences? Human intervention is always going to be essential when it comes to AI.”
Caetano adds that successful deployment of AI requires an understanding of “what humans are good at and what machines are good at”.
“When you’re talking about building a system that makes decisions on insurance pricing or that processes insurance claims automatically, you need to create an appropriate governance framework in which the function of people and the function of machines is well scoped,” says Caetano.
“From there, you can create the right protocols to control how humans and machines interact.”
Friend or foe?
Artificial intelligence is designed to augment the work of humans, says Dirndorfer Anderson.
“A lot of these computational tools and machine-learning-based technologies are able to absorb so much more, so much faster. As far as I’m concerned, let them do that, because then a person is freed up to actually do some of the analytical work,” she says.
“One of the aspirations in the early days of computing was to have a situation where machines could be developed to do what machines do well, which is the collating and collecting of data and information, and then freeing up humans to do much more of the analytic and creative thought.
“However, that doesn't always happen automatically, and that's one of the problems. There's a lot of talk about the opportunities and the promises, and they are there, but organisations need to really consider how they can have augmented intelligence, where they’re using the best of humans and the best of machines.”
Making the world a better place
Caetano argues that the purpose of artificial intelligence is not to replace humans with machines.
“The purpose actually is to improve the world,” he says.
“In the case of insurance, for example, you want to protect vulnerable people, and the question is, how do you best do that in a way that’s profitable for the business and that also creates an incentive for people to take up insurance, or not drop it because the premiums are too expensive.
“The idea is to design a system composed of both human and machine decision making, so that you best deliver on your purpose.”
In a world of AI, Kate Fairley’s advice for young insurance professionals is to hone the skills that humans really value.
“Robots will never replace human interaction, and that’s what customers value,” she says.
“I have some motivational posters on the wall of my office to help inspire staff and one of them says, ‘Pick up the phone!’.
“Talking to your customers and really listening to them gives you an opportunity to learn and to build essential interpersonal skills that simply can’t be replaced by robots.”