
The future reality of artificial intelligence

All technology is a double-edged sword, but artificial intelligence has, arguably, offered more potential than any recent innovation to alter society both for better and for worse

AI as an emerging threat has the potential to form its own insurance product, but also to reshape elements of existing products

Earlier this year, an employee in the Hong Kong office of a multinational company made transactions worth a total of HK$200m ($25m) after receiving instructions from their chief financial officer (CFO) during a video call.

The executive was one of several faces the employee recognised on the call. The problem was it was a scam. None of the other attendees – including the CFO – was real. All their images and voices were deepfakes – digital clones created using generative artificial intelligence (AI). The story, first reported by the Hong Kong public broadcaster RTHK, sounds like something from a dystopian future, but it highlights real concerns about new cyber threats posed by generative AI.

For cyber insurers, AI has the potential to create a new risk landscape. AI can help write new malware, speed up phishing email operations and act as a tool to test attacks against anti-malware technology.

“We will see some attacks we’ve never seen before using AI because the limit is the imagination,” Mauro Marongiu, technical head of cyber underwriting at managing general agent Alta Signa, says.

Mauro Marongiu, technical head of cyber underwriting, Alta Signa

Among these risks, deepfakes are one of the most important new phishing threats to emerge from this technology, Marongiu argues, because not only do they make it hard to know what is, and what is not, real, but they also represent an evolution of an existing threat, rather than something brand new.

“To me, the landscape is changing a lot, but the attacks remain the same categories… it’s like a game of cat and mouse; the tools are different, but the approach is the same,” he says. “Today we are talking about AI, tomorrow we will probably be talking about some new technology, so it’s important to have good governance and good processes to manage all this stuff.”

“The landscape is changing a lot, but the attacks remain the same categories… the tools are different, but the approach is the same. Today we are talking about AI, tomorrow we will probably be talking about some new technology”
Mauro Marongiu
Alta Signa

All technology is a double-edged sword, but AI has, arguably, offered more potential than any recent innovation to alter society both for better and for worse.

“There’s no question any of these tools that are going to be developed in the tech space are inevitably going to be looked at by bad actors,” David Grigg, executive managing director leading on US cyber reinsurance at Aon, says. The Hong Kong scam is “an incredible story and you’ve got to admire the graphic capabilities of that particular hacker”, he adds.

 

Class action threat

But, for Grigg, the bigger AI threat comes from elsewhere. What he worries about, particularly in the US, is how AI might drive a new form of class action lawsuit.

Imagine a hacker gains access to a massive data set of personal healthcare information – not unlike the recent Change Healthcare hack – while an unrelated hack exfiltrates a set of email addresses and phone numbers. AI and complex machine learning could allow these two disparate data sets to be cross-tabulated in different ways, giving bad actors the ability to target millions of individuals with small ransom demands. “At $50 for each occurrence, little numbers add up to big ones,” Grigg says.
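As a rough, hypothetical illustration of how little work such cross-tabulation requires, the short Python sketch below joins two synthetic breach extracts on a shared key. Every name, column and value in it is invented; the point is only that two individually inert data sets become a contactable target list after a single join.

import pandas as pd

# Breach 1: medical records keyed by name and date of birth (synthetic).
health = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "dob": ["1980-02-01", "1975-06-12", "1990-11-30"],
    "diagnosis": ["condition-x", "condition-y", "condition-z"],
})

# Breach 2: contact details from an unrelated leak (synthetic).
contacts = pd.DataFrame({
    "name": ["A. Jones", "C. Lee"],
    "dob": ["1980-02-01", "1990-11-30"],
    "email": ["aj@example.com", "cl@example.com"],
    "phone": ["+44 7700 900001", "+44 7700 900002"],
})

# One join turns the two leaks into a list of individuals who can be
# contacted about their own medical data – at scale, the "little numbers"
# Grigg describes.
targets = health.merge(contacts, on=["name", "dob"], how="inner")
print(targets)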

David Grigg, executive managing director, US cyber reinsurance lead, Aon

Aon’s threat intelligence teams have already seen evidence of hacks targeted at individuals, threatening to release their medical data unless they send small sums of bitcoin to the hackers, who even provide step-by-step instructions on how to set up a bitcoin wallet. These individual ransom attacks may not be insurable events in their own right, but the third-party damages and defence costs associated with a class action lawsuit could become expensive for insurers.

A source who spoke on condition of anonymity said if exfiltrated data from a large US corporate is driving these ransom attacks, the US plaintiffs’ bar is “always looking for ways in which to assemble a class and make a run at a large corporate and their insurers”.

“It’s analogous to the whack-a-mole game. A threat’s popped up in healthcare, so let’s hit that mole on the head and add an exclusion, but then, all of a sudden, the threat pops up in the restaurant industry and the focus moves”
David Grigg
Aon

Selim Cavanagh, director of insurance at AI provider Mind Foundry, also raises concerns about class actions, but not from data breaches. He highlights the use of AI for medical screening. These are powerful tools that can have real diagnostic benefits.

For now, there is always a human in the loop, but a time may come when these tools are no longer monitored and AI is making the decision. If an error is introduced into the AI and it fails to identify hundreds or even thousands of cases of cancer, that becomes a medical malpractice lawsuit potentially covered by insurance, Cavanagh says.

“All these things are potentially going to happen and people are beginning to talk about it. Will it be covered under existing exposures? Is it additional exposure and, if so, have we priced for it?” he says. “This is where insurers have to work really closely with their customers. What’s really happening [with the AI tools]? Is there a hand-off? What’s the double check? If there’s a systemic problem in there, how are you getting to that quickly?”
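What such a “hand-off” might look like in practice can be sketched in a few lines of Python. The fragment below is a minimal, entirely hypothetical illustration – a toy scikit-learn classifier, synthetic data and an arbitrary threshold – of a system that acts alone only when it is confident and otherwise refers the case to a human reviewer.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                   # synthetic screening features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels
model = LogisticRegression().fit(X, y)

REVIEW_THRESHOLD = 0.9  # hypothetical governance setting

def triage(sample):
    """Return the model's call, or defer to a human when confidence is low."""
    prob = model.predict_proba(sample.reshape(1, -1))[0]
    confidence = prob.max()
    if confidence < REVIEW_THRESHOLD:
        return "refer to clinician"  # the human-in-the-loop double check
    return f"automated result: class {prob.argmax()} ({confidence:.2f})"

print(triage(X[0]))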

 

Data poisoning

Threat actors can also hack or manipulate a business’s own AI tool in an attack called data poisoning. Any tool that uses machine learning is vulnerable to this type of attack, where bad actors infect the data sets used to train algorithms or AI models to change their output. “You have to verify if the data set you’re using to train your AI system is good or not,” Marongiu says.

Modern anti-malware tools are an example of this vulnerability. Many of these services now use machine learning and behavioural analysis to differentiate malicious attacks from normal network activity – if bad data is introduced, they could learn to overlook certain attacks. The answer is to constantly verify both the training data and the outputs of the AI models, but this is not always done. The problem is compounded when insureds start using third-party software over which they have no oversight.
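To make the mechanism concrete, the Python sketch below poisons a deliberately simple, synthetic traffic classifier by flipping the labels on most of the malicious training samples. It illustrates label-flipping poisoning in general – no real anti-malware product works on two Gaussian blobs – but the effect is the one described above: the model learns to overlook attacks.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "network activity": benign and malicious traffic form two clusters.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
malicious = rng.normal(loc=3.0, scale=1.0, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: learns to flag the malicious cluster.
clean = LogisticRegression().fit(X_train, y_train)

# Poisoning: an attacker with influence over the training pipeline relabels
# most malicious samples as benign, so the model "learns" them as normal.
y_poisoned = y_train.copy()
mal_idx = np.where(y_train == 1)[0]
flipped = rng.choice(mal_idx, size=int(0.6 * len(mal_idx)), replace=False)
y_poisoned[flipped] = 0

poisoned = LogisticRegression().fit(X_train, y_poisoned)

# Detection rate on genuinely malicious test traffic drops sharply.
mal_test = X_test[y_test == 1]
print("clean model detects:   ", clean.predict(mal_test).mean())
print("poisoned model detects:", poisoned.predict(mal_test).mean())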

Selim Cavanagh, director of insurance, Mind Foundry

The rollout of cloud computing is comparable to how the risks are developing in AI, Daniel Carr, head of cyber underwriting at Ariel Re, says. Fifteen years ago, companies ran their own big, expensive data centres. Cloud computing created efficiencies and opened computing power to businesses that could not afford that infrastructure themselves, but it also moved day-to-day risk management further away from the business.

“There’s no button you can push that automatically builds a spear phishing campaign or breaks into a system. It’s a lot of tools cyber criminals have that make them better at it, but they still have to do all the work”
Selim Cavanagh
Mind Foundry

“You may be concerned about the risk of running everything in the cloud, but it’s a 10th of the price – do it yourself for 10 times the price and you’re not competitive any more and, to some degree, you end up with a bit of an arms race into technological adoption,” he says. The same is true of AI – businesses are focusing on rolling out the technology to stay competitive and are then addressing the downside risks afterwards.

For insurers, the implications could be stark. Comparisons are often drawn to “silent cyber”, which plagued the industry before the standalone cyber market started to mature. In a similar way, AI risk could rear its head in all sorts of lines of business, from directors’ and officers’ liability to professional indemnity and beyond.

 

Generative AI exposures

The first question for carriers, Cavanagh says, is which lines are likely to be most heavily exposed to risks relating to the use of generative AI. Cyber is an obvious one, especially with the spectre of deepfakes, but other, less obvious, lines will also be at risk as more businesses use AI in their operations.

Then comes the question of whether this risk needs to be separated out into its own line of business or if it should be priced into existing lines. One benefit of creating a dedicated AI line of business is it will allow the market to wrap in the governance and risk-control infrastructure alongside the insurance product, much like how the cyber market has started to evolve.

Daniel Carr, head of cyber underwriting, Ariel Re

“I didn’t think cyber worked as a product until it was wrapped around with technology and consulting vendors and repackaged to say: this is how you deploy and work and essentially mitigate risks that are coming from cyber,” Cavanagh says.

Carr believes AI risk needs to be integrated across different lines of business. “I think the market, to some degree, gets too bogged down in what the current structure is and where best to think about certain new exposures,” he says. “The insurance industry quite often gets into a bit too much of a turf war around [where risks should sit] but the bigger balance sheet reality is, who knows the most about it? Are we analysing and assessing the risk appropriately? Are we thinking about the gaps that maybe we don’t know yet?”

“The insurance industry quite often gets into a bit too much of a turf war around [where risks should sit] but the bigger balance sheet reality is, who knows the most about it? Are we analysing and assessing the risk appropriately?”
Daniel Carr
Ariel Re

AI as an emerging threat has the potential to form its own product, but also to reshape elements of existing products and, ultimately, that is a marketing and distribution consideration, Carr says. “It’s bringing volatility around what’s well understood and all the structures that are there, but it isn’t fundamentally changing the structure of a product or the methodologies or the approaches,” he says.

Where the market seems to agree is excluding AI risk is not the answer. “It’s really hard to define what AI is, so to exclude it is difficult,” Cavanagh says. “You might end up excluding everything because AI will be everywhere… so I don’t see that happening.”

For Cavanagh, the right approach will be to find the right risk price in each line of business. “If we create specific cover which looks at the externalities, whether it’s the loss of intellectual property or someone getting hurt because a machine has gone wrong, at least that will attract the right risk control governance and help to make sure they are deploying and using AI correctly.”

 

Exclusions are not the solution

Exclusions are a kneejerk reaction to the changeability of the AI landscape, Grigg says, and a poor solution at that.

“It’s analogous to the whack-a-mole game,” Grigg says. “A threat’s popped up in healthcare, so let’s hit that mole on the head and add an exclusion, but then, all of a sudden, the threat pops up in the restaurant industry and the focus moves. I don’t think that sort of approach is a viable solution – we need to be more disciplined and strategic.”

The challenge instead is how to underwrite these risks profitably. “Insurers are in the business of helping insureds conduct their business in a profitable manner. Just adding exclusions to me seems like we’re not properly addressing the underlying risk,” he says.

One of the responses is to fight fire with fire and roll out AI, machine learning and big data in a more affirmative way. The technology could, for example, have a hand in monitoring and processing intelligence from the dark web – the secretive side of the internet where a lot of illicit activity takes place – to help more rapidly assess and identify potential attacks. “Can we be using these tools to help us almost like a weather forecast?” Grigg asks.
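A purely hypothetical sketch of that “weather forecast” idea follows: the Python fragment below ranks invented threat-feed snippets by their textual similarity to an equally invented portfolio profile, so the chatter most relevant to an insurer’s book surfaces first. Real threat-intelligence pipelines are far richer; this shows only the triage principle.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical description of the kinds of insureds in a portfolio.
portfolio_profile = "hospital healthcare patient records ransomware us insurer"

# Invented examples of dark-web chatter picked up by a monitoring feed.
feed = [
    "new ransomware kit advertised, targets hospital patient record systems",
    "credential dump from a european gaming forum posted for sale",
    "actor seeking access brokers for us healthcare claims platforms",
]

# Score each feed item against the portfolio and surface the closest matches.
vec = TfidfVectorizer()
matrix = vec.fit_transform([portfolio_profile] + feed)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for score, item in sorted(zip(scores, feed), reverse=True):
    print(f"{score:.2f}  {item}")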

There are limitations to how this information can be used, however. As Grigg points out, many insurers still lack simple, easy-to-use communication platforms for notifying their insureds, let alone notifying them of emerging AI threats. But the concept is sound and one national security services are already embracing.

A handful of cyber managing general agents are already using AI tools to actively monitor for threats, but most cyber insurers use third-party software to create a snapshot of an insured’s cyber risk at the point of renewal. AI could help make constant monitoring cost-effective in a way these point-in-time scans are not.

 

Policy wordings

The other approach is a more old-fashioned look at policy wordings. “I think the lessons learned from silent cyber must be applied for silent AI. The main difficulty here is understanding what [future] scenarios could be because, for sure, we will not consider every scenario that could happen,” Marongiu says.

“If we take the parallels from silent cyber, the only way to change the system is to use both technology and wordings,” he adds.

There is another element of risk management AI can learn from cyber, one that is far from technology-driven, Cavanagh says: how businesses organise their workplaces to eliminate risky behaviours systematically.

“The cyber bit effectively combined several things – a massively evolving market with huge risk profiling, with technology vendors and consultants who brought all that together to create a really effective risk control framework founded on technology, but also the behaviours of the organisation.

“That’s a really amazing evolution and you can absolutely see AI being the next thing. Most of AI risk does stay in the IT realm, but not all of it, because how it’s used, and where it’s used, are as important as the core technologies,” Cavanagh says.

While AI brings novel risks, the primary threat remains phishing emails, Grigg says. Even in the Hong Kong case, reports suggest the targeted employee was initially sceptical of the email about the secretive payments. It was only after she was on the video call that the deepfakes convinced her the instructions were legitimate.

“It can be a long process, but it comes down to making sure everyone in your family and your organisation is attuned to the issue associated with emails and links and PDFs and photos in emails,” Grigg says. “I do think the best line of defence is pretty old-fashioned – addressing individual behaviour.”

Despite the challenges posed by AI risk, Cavanagh has an optimistic view on the sector’s ability to handle the cyber threat. “There’s no button you can push that automatically builds a spear phishing campaign or breaks into a system,” he says.

“There are a lot of tools cyber criminals have that make them better at it, but they still have to do all the work… it’s still a massive manual enterprise to attack an organisation.”
