
This article was paid for by a contributing third party.

In Depth: Cyber risk and AI – friend or foe?


The lessons from a deepfake cyber attack this year in Hong Kong are being discussed across the world.

It began when a finance clerk at the Hong Kong branch of a large multinational corporation received an email from the chief financial officer requesting an urgent financial transaction.

The clerk was suspicious, but his doubts eased after joining a video conference call with his finance chief and other senior managers. Over multiple transactions, $25m (£19.9m) was transferred from the corporation to Hong Kong accounts.

Only later did the clerk discover he had fallen victim to one of the most sophisticated deepfake scams the world had ever seen.


Cyber fraudsters had used publicly available company videos and audio, probably combined with powerful generative artificial intelligence, to create fake executives.

Specialist broker Stephen King, divisional manager, cyber at James Hallam, South West Corporate, says: “It’s unbelievable; really scary. It has got lots of cyber insurers thinking ‘How on earth do you combat that?’

“Are you going to have to have code words that you throw into video calls to make sure it’s the person and not an avatar you’re talking to? That’s opened a whole new can of worms.”

Risk consultancy firm Control Risks says it observed a threefold increase in deepfake videos and an eightfold increase in deepfake audio globally between 2022 and 2023.

Aviva head of cyber insurance, Stephen Ridley, says: “You could see similar attacks carried out against larger businesses where CEOs could be impersonated, with deepfake technology, which could have an impact, or negative impact, on share price.”

AI presents a double-edged sword for insurance companies and their clients.

On the one hand, it has multiple uses in boosting workplace creativity and efficiency. It powers the latest defensive cyber technology.

But it is also used by criminals to create convincing scams and penetrate the cyber systems of businesses.

Navigating this tricky path is the challenge facing insurance brokers and carriers.

AI phishing attacks

Despite the headline-grabbing nature of the deepfake attacks, such as the one in Hong Kong, insurance cyber experts believe AI-powered phishing attacks are the biggest risk right now.

Phishing uses fake emails and texts to lure victims into logging in or handing over information, which then opens the door to an attack.

Cyber hackers holding big datasets can identify the most vulnerable targets and the most effective ways to deceive, Ridley says.

Generative AI can create authentic-looking fake content.

Ridley says: “Now, with AI, these phishing emails can be made to look incredibly legitimate and much harder to spot. That’s where I see the most pressing element in this risk.”


Brokers agree that AI phishing is the big risk.

Partners& cyber director, Matthew Clark, says: “If you ask an AI tool to write an email for you, they’re incredibly versatile and sophisticated tools. They can produce very plausible looking messages, which you can use for nefarious purposes.

“That’s the sort of thing – phishing and social engineering attacks – they’re probably going to be used for, initially, as well as attempts at brute force hacking, networks and passwords. I think that’s the challenge right now.”

James Hallam’s King says AI has made the language of phishing emails far more convincing. “Phishing emails have suddenly become a lot more believable and persuasive; they’re not quite so easy to spot any more.

“The language is much better. It’s making us stop and think a bit more before we either act or look to dismiss that email.”

AI benefits

It is easy to become disheartened at the threat from AI; but, on the other side of the coin, the benefits are immense.

Ashwin Kashyap, chief product officer and co-founder at CyberCube, says AI has been “pivotal in the advancement of the cybersecurity industry”.


AI can work with massive amounts of unstructured data to conclude whether a threat is credible.

People worry about the impact of AI-generated malicious content, such as deepfakes and sophisticated phishing, but AI is helping, he explains.

“A key investment focus today is in the area of detection of malicious, AI-generated content,” he says. “A massive amount of synthetic content is created every day using AI tools, and a notable proportion of it is misinformation.

“Imagine a world where one can reliably identify digital content as being synthetically generated. This can help materially reduce the negative impact of AI tools, including cyber threats.”

Vulnerabilities

King believes companies will inevitably embrace AI.

If that is the case, they should take advice from legal experts, draft a company AI policy and train staff on it.

King adds: “You don’t want staff in remote parts of the business accessing systems that you have said are off limits, because that puts everything at risk.

“It’s here; we’re not going to get rid of it. We’re all going to have to adopt it and use it in some way. But let’s have some control over what we're doing.”

Aviva recommends companies regularly evaluate their AI systems for security and risk vulnerabilities, a view echoed by its consultancy partner, Control Risks.

A Control Risks spokesperson said: “Companies should look to understand how AI developments are impacting their risk landscape, where threat actors are likely to leverage these technologies against them.

“They should focus on defensive efforts, review their third-party risk management approach in relation to evolving AI and broader technology risks, and also consider the likelihood and impact of attacks on their own AI systems and technologies to prevent these systems from being used for malicious purposes or the poisoning of their data.”

AI bias

A significant risk is companies becoming overreliant on AI and algorithms to make decisions. The sheer complexity of AI and algorithmic decision-making means companies might not have the capability to explain AI outcomes.


There is a danger that ‘the computer says no’ becomes the only reason a customer is rejected or faces higher pricing. This could lead to unfair customer exclusion and even fall foul of the law, especially if decisions are based on a protected characteristic such as age, race, gender or disability.

Ridley believes insurance firms can help to manage this risk.

He says: “That is part of the risk management process. It is not necessarily making sure you know the full intricacies of everything. It is, as [ex-US secretary of defence] Donald Rumsfeld said, about the known unknowns.

“It’s about knowing what the limitations are and being able to put a process in place.”

Insurance industry impact

For the insurance industry, brokers and insurers must brace themselves for an increase in claims frequency.

CyberCube cites these reasons: AI tools creating more sophisticated fake content for phishing; hackers using large language models to gain sensitive data that will then be leaked, triggering privacy violations; and the rise of AI co-pilots assisting malign actors.

Kashyap says: “We expect an increase in the frequency of cyber attacks that will impact loss ratios in the cyber insurance industry over the coming years.”

Recognising the profound impact AI will have on insurance, Ridley emphasises the need to adapt insurance cover.

He says: “Policies should be set up to respond to these types of attacks. One that we’re focusing on more is the reputational element of businesses with those deepfakes.

“We’re considering how that might impact customers and how we support them if they suffer those types of incidents. That’s where we’ll see more evolution from an insurance standpoint.”

It is clear the insurance industry is taking the AI threat seriously and adapting as fast as possible.

Deepfake attacks like the one in Hong Kong and sophisticated AI-generated phishing traps show the stakes are high.

The battle against the dark forces in AI is a technological race that the good actors must win. 

Catch the third and final part of this In Depth series on cyber tomorrow
