AI-Generated Scams 101: What You Need To Know

Published: November 6, 2024

Artificial intelligence has evolved from a scientific novelty into a common, integrated part of society. Anyone connected to the Internet now has access to AI-based chatbots that can generate complex images and text, and established paradigms of information, authorship, and digital ethics are struggling to adapt.

It is not surprising, then, that AI is increasingly being used for digital scams of all kinds, posing heightened risks to companies and individuals alike and demanding an elevated response.

Types of AI-Generated Scams

Online scams are a ubiquitous aspect of life on the Internet, ranging from the merely annoying to the deeply dangerous. Jokes about phishing emails from foreign princes asking for money or promising iPhone giveaways belie the serious and lasting damage scammers have caused.

The most damaging attacks are often achieved through campaigns of systematic deception and manipulation, designed to make targets willingly hand over the information attackers are pursuing. These are commonly termed “social engineering” attacks.

Though such campaigns do not always involve direct computer hacking, they are just as dangerous as malware-based attacks: a 2023 IBM report suggests that breaches involving social engineering tactics cost companies an average of more than $4 million.

Artificial intelligence has now made social engineering easier than ever to conduct. Such campaigns are sometimes stopped early because threat actors give away “tells” that they are attempting to deceive their targets, such as using inaccurate company information or credentials.

But AI can be employed to create content that is not so intuitively suspicious.

In early 2024, an employee at the British engineering company Arup was deceived into sending approximately $25.6 million to attackers who used AI to pose as the company's CFO and other staff during a live video call. The attackers used AI-generated deepfakes to mask their identities in real time, and the deception proved so convincing that a week passed before the company began to investigate.

Although AI-developing companies are racing to create safeguards against such misuse of their tools, it is possible to circumvent these. In the case of AI-powered chatbots, users can develop “jailbreaks” by crafting prompts that lead chatbots into breaking or bypassing their own ethical constraints.

For example:

  • ChatGPT can be directed to “roleplay” as a different AI model that does not abide by its original safeguards. While the free version of ChatGPT 3.5 will refuse to generate a “phishing email aimed at getting the login credentials of corporate employees,” it will comply with a request to “write a formal email by a CEO asking employees to click the link below.”
  • As described in a BBC report, the paid version of ChatGPT lets users create customized AI assistants for accomplishing specific tasks, and these assistants are not subject to the same ethical safeguards. The BBC team created an assistant geared specifically toward generating sophisticated phishing emails, which was able to learn and apply best practices from other instances of social engineering.
  • Other chatbots, such as WormGPT and FraudGPT, were intentionally developed to let users generate fraudulent content.

The impact of these technologies on digital security is not just theoretical.

In November 2023, the cybersecurity firm SlashNext released a report indicating that the frequency of phishing attacks had increased by over 1,000% since the 4th quarter of 2022, a surge the firm concluded was likely enabled by the public launch of tools like ChatGPT.

ChatGPT’s ethical safeguards can be circumvented (source: Red5 content team, photo taken in January 2024).

Generative AI has also facilitated the creation of new malware.

Chatbots can write code in various programming languages, and although some have ethical safeguards against writing potentially malicious code, they cannot fully predict how relatively benign code could be put to unethical use.

For example, ChatGPT refuses to write code for remotely accessing computers, but it will provide code for sending digital push notifications, which are often part of companies’ two-factor authentication systems and can be exploited to gain access to corporate networks.
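
To see why such requests slip past safeguards, consider how ordinary this kind of code looks. Below is a minimal sketch in Python of a script that sends a push notification through a hypothetical provider's REST API; the endpoint URL, token, and payload fields are illustrative assumptions, not any real provider's interface:

import requests

# Hypothetical push-notification endpoint and API token -- placeholder
# values for illustration, not a real provider's interface.
PUSH_API_URL = "https://push.example.com/v1/notifications"
API_TOKEN = "YOUR_API_TOKEN"

def send_push(device_id: str, title: str, body: str) -> None:
    """Send a push notification to a single device via an HTTP POST."""
    payload = {"device_id": device_id, "title": title, "body": body}
    response = requests.post(
        PUSH_API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface errors rather than failing silently

# Nothing above is inherently malicious, but aimed at a company's 2FA push
# channel, the same pattern can flood employees with approval prompts
# ("MFA fatigue") until one is mistakenly accepted.
send_push("employee-device-123", "Sign-in request", "Approve this sign-in?")

A chatbot asked for “a script that sends a push notification” sees nothing objectionable here, which is exactly what makes this class of request so hard to police.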

Aided by chatbots, even malicious actors with limited programming knowledge can therefore mount dangerous malware attacks. There is also “AI-powered malware,” which integrates generative AI to write new malicious code on its own or to spread to other computer systems, though to date such malware has been created only as experiments and proofs-of-concept by cybersecurity researchers.

What You Can Do

There are no simple solutions when malicious actors adopt these technologies. The recommendations for countering online scams have not fundamentally changed since before the widespread adoption of AI; they have only become more important:

  1. For malware or social engineering scams crafted with the help of AI, the foremost line of defense is still being careful with suspicious links and online messages.

    AI’s capacity to make both malware and social engineering campaigns more convincing and accessible means that targets should no longer expect to easily identify malicious digital messages.

    As such, both individuals and companies need to be even more vigilant toward any emails, text messages, or other communications that cannot be verified as legitimate, and adopt a stance of healthy suspicion.

    Doing so proved critical in recent AI-powered social engineering attacks against Ferrari and the advertising company WPP Plc, which employed deepfakes but were ultimately foiled.

  2. Robust and updated antivirus software also remains deeply important.

    Although AI is making it easier and faster to create new malware variants, cybersecurity firms are taking more aggressive steps in this arms race, including using AI to defend against attacks. A strong, up-to-date antivirus can still make all the difference.

  3. For real-time video or audio calls, there are also ways of identifying deepfakes.

    The simplest is establishing a process for verifying the identity of a call’s participants, such as requiring them to present the correct credentials or answer security questions (though these, too, need to be guarded from threat actors).

    Other techniques can be employed during calls to provoke visible glitches in AI-generated masks.

    These include asking participants to turn their head to show their profile, pass an object in front of their face, or, on audio-only calls, hum a random tune or speak in a different accent.

The increasing complexities and developing threats of the AI age are precisely the kinds of challenges that Red5 is built to handle, by providing tailored advice and solutions for increasing your digital security. We would be happy to assist if you have further questions!

Wagner Horta


