Artificially intelligent cyber attacks

7th May 2024

We are all well accustomed to spotting and handling phishing emails: poor English, obscure domain names and generic wording all provide adequate warning not to click on the link or download the attachment.

Whilst these emails can sometimes be convincing, after basic cyber security training most of us can spot the dodgy ones. However, Artificial Intelligence (“AI”) is now enhancing the ability of threat actors to dupe us. With AI creeping into our everyday lives, whether through the predictive text on our devices finishing our sentences and correcting words, or large retailers utilising chatbots for customer service calls, it is becoming a helpful tool for speeding up our personal and professional lives.

However, as with all new technology, it has not only been used as a productivity tool. Potential fraudsters have the same, if not greater, access to AI and large language models (“LLMs”), and are using them to improve the efficacy of their scams.

We are all flooded with alarming information about the potential capabilities of generative AI, but how does it actually work? A full explanation is beyond the scope of this article, but in short, generative AI products are based on an algorithm which draws on data from an LLM and interprets a user’s prompt to produce the requested output.
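
To make that concrete, the exchange between user and model looks something like the following. This is a minimal sketch in Python, assuming the OpenAI client library and an API key in the environment; the model name and prompt are illustrative only:

```python
# A minimal sketch of the prompt -> output loop described above,
# assuming the OpenAI Python client (openai >= 1.0) and an API key
# stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Draft a short welcome email for a new customer."}
    ],
)

# The model interprets the prompt and returns the requested output as text.
print(response.choices[0].message.content)
```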

An LLM can be trained on vast amounts of general data or on specific sources. The data is first formatted into a training set, which allows the AI to apply deep learning algorithms to process and understand natural language. Once trained, the AI can recognise and generate text and images, and perform tasks, using the information learned from that data.
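
The “learning” step can be illustrated with a deliberately simplified toy. The sketch below is not a real LLM, but it shows the underlying idea: record which words tend to follow which in the training text, then use those patterns to generate plausible continuations.

```python
# A toy illustration (not a real LLM) of learning patterns from text:
# count which word tends to follow which, then use those counts to
# generate a plausible continuation.
import random
from collections import defaultdict

training_text = (
    "please find attached the invoice for this month "
    "please find attached the report for this week"
)

# "Training": record, for each word, the words observed to follow it.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": start from a word and repeatedly pick a likely next word.
word = "please"
output = [word]
for _ in range(6):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "please find attached the invoice for this"
```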

AI capabilities were recently emphasised by Tim Akkouh KC at the Civil Fraud 2024 Conference. In his presentation he described how, using ChatGPT, he set up a convincing, but fake, pizza takeaway business in one evening whilst sitting on his sofa.

Following simple instructions, ChatGPT produced a website, branding and a number of fake reviews. To emphasise how ChatGPT can be exploited to lend legitimacy to this fake business, during the conference he asked it to produce a set of accounts and to generate a fake accountant’s name to be credited with preparing them. He simply gave it an annual profit figure and the business expenses, and ChatGPT returned a credible-looking balance sheet for what appeared to be a legitimate business.

Similarly, if potential fraudsters using an LLM can collect a large enough sample of how any given person communicates, in theory the AI could be taught to replicate that person’s correspondence. If done well, it would be almost impossible to tell an email written by the AI from one sent by the real person.

How can you adapt to the risks of AI and protect yourself against cyber-attacks utilising it?

  • As we have before, learn to spot the signs. For example, typos might be an indication that the email was sent by a real person, and grammatical errors can suggest human authorship too. If the spelling is Americanised but the sender is meant to be British, that could indicate reliance on a US-trained LLM (a simple spelling check is sketched after this list).
  • Are there any factual errors or oddities in the content of the email? Insufficiencies in the training data can cause what are referred to as “AI hallucinations”, where the model makes incorrect assumptions, or biases in the training data produce unintended meanings in the output.
  • If you suspect an email to be a ‘fake’, test the sender. Subtly ask them something only the real human should know, or call them to verify that they sent the email. Depending on the data it has been given about the person it is mimicking, an AI can write like them and copy their email mannerisms, but it does not yet have access to their knowledge and life experience; use this to your benefit.
  • Agree a policy of pre-agreed challenge phrases with customers and contacts – although these must not be stored electronically. This is especially useful for communications that involve the payment of invoices or the sharing of sensitive information.
  • On a wider scale, if you come across the likes of the pizza business example above, check the website’s history: tools like the Wayback Machine – an archive of websites – can be utilised (a sketch of such a check also follows this list). Alternatively, look into employing an AI detection tool.
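
For the first point above, the idea of checking for Americanised spellings can be reduced to a simple heuristic. This is a toy sketch only, with an illustrative (and far from exhaustive) word list:

```python
# A toy heuristic: flag Americanised spellings in an email that claims
# to come from a British sender. The word list is illustrative only.
US_TO_UK = {
    "color": "colour",
    "organize": "organise",
    "center": "centre",
    "analyze": "analyse",
}

def flag_us_spellings(email_body: str) -> list[str]:
    """Return any US spellings found in the email body."""
    words = (w.strip(".,!?") for w in email_body.lower().split())
    return [w for w in words if w in US_TO_UK]

email = "Please analyze the attached invoice and contact our billing center."
print(flag_us_spellings(email))  # ['analyze', 'center'] -> worth a closer look
```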
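
And for the last point, the Wayback Machine exposes a public “availability” API that can be queried programmatically. A minimal sketch, assuming the Python requests library; the domain name below is hypothetical:

```python
# A minimal sketch of checking a website's history via the Wayback
# Machine's public availability API.
import requests

def earliest_snapshot(url: str) -> str | None:
    """Return the closest archived snapshot URL for a site, if any."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": "19960101"},  # ask for the oldest match
        timeout=10,
    )
    resp.raise_for_status()
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

# A site claiming years of trading but with no archive history is a red flag.
print(earliest_snapshot("example-pizza-takeaway.co.uk") or "No archive history found")
```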

We have adapted before and we will adapt again. Just remember, in the early days whilst you adjust to this new AI world, if you fall foul of a directed cyber-attack, the same rules of mitigation apply. Prompt action is key.
