Artificial intelligence

Rise of the machines: AI is opening up a new front in the war on cyber crime

Published on 7th Jun 2023

All new technologies create opportunities for criminals and it is a constant race to stay ahead of the latest threats


2023 is shaping up to be the year artificial intelligence (AI) goes mainstream. No one can have failed to notice the rapid advances in the sophistication of AI and machine learning. Some extremely powerful AI systems based on large language models (LLMs), such as ChatGPT, are now designed to be used without specialist knowledge and have been released to the general public, with remarkable results. Much less welcome, however, are the opportunities the software presents for cyber criminals.

As well as generating convincing human-like prose, LLMs can also generate computer code. This means they can be used to generate malware (the malicious software used by cyber criminals).

Next-generation malware

One cyber security company has published a report showing how LLMs could potentially be used to help create advanced malware which could evade security measures. The researchers were able to bypass content filters designed to prevent the generation of malware. There have since been further reports of LLMs being used to generate malware (albeit with varying degrees of success).

The implications of this are significant. Cyber criminals may already be using LLMs to help create more powerful malware, which is harder to detect, can easily evade traditional cyber security defences and can do more damage.

Cyber crime democratisation?

LLM technology is readily available to anyone who wants to use it, and this could further democratise cyber crime. For now, technological limitations mean LLMs are more likely to be used as a tool by experienced cyber criminals to improve the efficiency and sophistication of their software than by novices to create malware from scratch.

But the technology is only going to improve, and rapidly, both in the sophistication of its output and in its user-friendliness. Not only will existing cyber criminals use LLMs to write malware that is currently beyond their own abilities, but the technology's ease of use may well tempt more people to experiment with cyber crime for the first time.

Phishing and fighting cyber crime

Away from malware, LLMs may also help criminals write more convincing phishing emails, avoiding the spelling mistakes, grammatical errors and incoherence that currently sometimes act as helpful red flags, and potentially eliminating language barriers. Employees who are only just catching on to the risk of phishing will need even more help to identify sophisticated traps. There are already reports of AI systems being used to generate realistic-sounding fake voices capable of fooling the voice identification systems used by financial institutions, or of convincing members of the public to send money to scammers posing as relatives in distress.

The good news is that AI can also be used in the fight against cyber crime. Its ability to analyse huge quantities of data and to self-learn can help information security professionals stay ahead of threats and develop more secure applications. LLM developers also build in safeguards to try to prevent their products from being misused, and these will no doubt become more robust as loopholes are identified and closed.

Osborne Clarke comment

All new technologies create opportunities for criminals, and it is a constant race to stay ahead of the latest threats. We are only at the foothills of the AI revolution and the rate of change is accelerating. It has never been more important to take steps to protect your business from attack.

Read more about how Osborne Clarke can help you mitigate and manage cyber risk at all stages.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
