Accelerating Risk of AI-driven Cyberattacks

The OneNet team would like to draw your urgent attention to the risk of scams that use AI to create emails, audio and video that are almost impossible to identify as fraudulent.

Your business will face an increasing risk in the near future as AI capabilities improve.

We want to ensure that you are well informed and strongly encourage you to bring this information to the attention of your firm’s leadership.

Background

In November 2022, OpenAI launched ChatGPT, a generative AI model that can create software code, images and text. Text-to-video generation is also improving very quickly. The technology has been rapidly adopted worldwide and has spawned many competing models, including offerings from Google and Meta.

Criminals are often among the earliest adopters of new technology, and AI is no exception. There are currently over 200 such AI models available on the “dark web”, with names such as FraudGPT and BadGPT.

How does this work?

Criminal AI models are usually legitimate models that have been modified, or “jailbroken”, to remove the safeguards embedded in them. They are often trained on public information such as known software vulnerabilities and their fixes, together with the detection techniques used by cyber-defence software. Their training data is also likely to include personal information from earlier data leaks and hacks, as well as ransomware victim and extortion lists.

These AI models, which cost very little to operate, are used to create fake websites, write malware and tailor messages to better impersonate a firm’s executives and trusted entities.

What are the risks?

A cautionary example of the new risks posed by AI-enabled scams was recently reported by the Wall Street Journal. Early in 2024, an employee of a multinational company transferred US$25.5 million to an attacker who posed as the company’s chief financial officer on an AI-generated deepfake conference call.

Just as employees are gaining productivity from AI, so too are the hackers. AI has made “spear-phishing” attacks more convincing: cyberattackers use information gathered about a person to make a fraudulent email appear legitimate.
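For technically minded readers, a small illustration of one defensive check: many spoofed emails are sent from domains that publish no email-authentication (DMARC) policy. The Python sketch below simply looks up whether a domain publishes one; it assumes the third-party dnspython library and Python 3.10+, and is an illustrative fragment, not a complete email-security control.

    import dns.resolver  # third-party: pip install dnspython

    def dmarc_policy(domain: str) -> str | None:
        """Return the DMARC TXT record for a domain, or None if absent."""
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None  # no DMARC record published for this domain
        for rdata in answers:
            txt = b"".join(rdata.strings).decode("utf-8", "replace")
            if txt.lower().startswith("v=dmarc1"):
                return txt
        return None

    # Example: a sender domain with no DMARC policy deserves extra scrutiny.
    print(dmarc_policy("example.com"))

Checks like this catch only crude spoofing; attackers can register look-alike domains with valid records, which is why the human verification steps discussed below remain essential.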

As little as five seconds of recorded speech is enough to emulate a person’s voice, and many bank accounts use voice-based security checks. This capability doesn’t even require access to the illicit dark web: it can be rented for as little as $2 per month.

In the twelve months following the launch of ChatGPT, phishing emails are estimated to have increased by 1,265%. Deepfake fraud attempts are estimated to have increased 30-fold in 2023.

What does the near future hold for deepfakes?

The ability to produce near-perfect deepfakes of an individual’s written style, voice and video likeness exists now. The next two or three years of exponential growth in AI capabilities will make it virtually impossible to distinguish the legitimate from the fake.

An AI arms race is underway between criminals exploiting these models and the protection agencies and industry vendors working to control or counter them.

Unfortunately, the “good guys” are falling behind. Identifying deepfakes is already very difficult, and for most business situations it will likely become virtually impossible over the next few years.

It is not only AI-driven deepfake cyberattacks that we need to worry about. Democratic values are also under assault from partisan interests and foreign governments, with deepfakes now confusing voters in most Western countries.

In a few years’ time, as AI-fuelled attacks accelerate, we will likely look back with nostalgia on the days when anti-virus software seemed an adequate protection.

What can we do to protect ourselves?

Unfortunately, there is no simple solution. 

New internal controls to counter AI-driven cyberattacks will likely mean a reversion to more manual processes: greater personal contact between employees and outside vendors, customers and advisors; further segmentation of employee responsibilities; and intensive double- and triple-checking through non-digital means.
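As a purely illustrative sketch of segmented responsibilities, the Python fragment below encodes a hypothetical “four-eyes” payment-release rule: two approvers distinct from the requester, plus an out-of-band phone check for high-value transfers. All names and the threshold are assumptions for illustration, not recommendations.

    from dataclasses import dataclass, field

    HIGH_VALUE_USD = 10_000  # hypothetical threshold; each firm sets its own

    @dataclass
    class PaymentRequest:
        amount_usd: float
        requested_by: str
        approvals: set[str] = field(default_factory=set)
        verified_out_of_band: bool = False  # confirmed via a known-good phone number

    def may_release(req: PaymentRequest) -> bool:
        """Allow release only with two approvers other than the requester,
        plus out-of-band verification for high-value transfers."""
        independent = req.approvals - {req.requested_by}
        if len(independent) < 2:
            return False
        if req.amount_usd >= HIGH_VALUE_USD and not req.verified_out_of_band:
            return False
        return True

    # A deepfake "CFO" call alone cannot satisfy this rule: it still needs
    # two independent approvals and a call back on a known number.
    req = PaymentRequest(amount_usd=25_500_000, requested_by="clerk")
    req.approvals.update({"manager_a", "manager_b"})
    print(may_release(req))  # False until verified_out_of_band is True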

All of these defensive responses, of course, will result in lower productivity and higher costs for everyone.

The need for education on these risks will grow, especially for employees who are in a position to unwittingly act on an AI-powered deepfake.

Cyber insurance and other relevant cover will become essential. However, once insurers have suffered massive losses, cover may become prohibitively expensive or unobtainable at any price.

Each firm’s leadership will need to decide how much effort to invest in these protective processes, balancing their cost against the increased risk of loss.

How can OneNet help you?

If you would like to discuss this information in the context of your firm’s operations, please contact your OneNet Relationship Manager in the first instance.

Have some questions? Please let us know how we can help.