Generative AI Being Weaponized to Automate Sophisticated Cyber Attacks

  • Threat actors are weaponizing generative AI tools like WormGPT to automatically generate convincing phishing emails and launch sophisticated BEC attacks at scale.
  • While ethical AI models have restrictions, unconstrained tools allow even novice cybercriminals to deploy personalized, grammatically flawless fake emails that evade detection.

The rapid advancement of generative artificial intelligence (AI) is not just transforming legitimate sectors; it’s also opening new, hazardous pathways for cybercriminals. As reported by SlashNext, a blackhat generative AI tool known as WormGPT has surfaced in underground forums, facilitating sophisticated phishing and business email compromise (BEC) attacks.

Cybersecurity researcher Daniel Kelley detailed how this tool, presented as an unlawful counterpart to existing GPT models, enables the automation of highly personalized and convincingly fake emails. “Such technology drastically increases the attack’s chances of success,” Kelley warned.

credit: SlashNext.com

The developer of WormGPT has boldly labeled it the “greatest adversary” of well-known ChatGPT, claiming it allows for a variety of illegal activities. While companies like OpenAI and Google strive to curtail the misuse of large language models (LLMs) for cyber attacks, WormGPT raises the stakes.

The use of AI is a game-changer, enabling even unskilled actors to deploy convincing attacks at scale

Daniel Kelley

Check Point, in its recent report, highlighted the ease of generating malicious content using Google’s Bard model due to its less stringent anti-abuse restrictions compared to ChatGPT. It’s not just the utilization of such models that poses a risk; cybercriminals are exploiting the API of ChatGPT, trading stolen premium accounts, and even selling brute-force software designed to hack ChatGPT accounts.

The emergence of WormGPT, unfettered by ethical guidelines, has potential to empower even novice cybercriminals to execute mass-scale attacks without possessing advanced technical knowledge.

Moreover, there has been an alarming rise in “jailbreaks” for ChatGPT, where cybercriminals manipulate the tool’s prompts and inputs to generate harmful outputs. These could range from disclosing sensitive information to creating inappropriate content or executing malicious code.

Credit: TheHackerNews

Kelley also warned of the threat posed by impeccable grammar and composition in malicious emails crafted by generative AI. “This democratizes the execution of sophisticated BEC attacks, making it an accessible tool for a broader spectrum of cybercriminals,” he said.

Researchers from Mithril Security recently exposed another risk: LLM supply chain poisoning. They “surgically” modified an open-source AI model known as GPT-J-6B to disseminate disinformation. Dubbed as PoisonGPT, the technique involves uploading the manipulated model to a public repository, like Hugging Face, using a name that imitates a known company – in this instance, a typo-squatted version of EleutherAI, the firm behind GPT-J.

“The use of AI is a game-changer, enabling even unskilled actors to deploy convincing attacks at scale,” Kelley said. “As generative models proliferate, we can expect new and sophisticated AI-powered threats to emerge.”

Researchers urge the AI community to prioritize safety and install proper controls to prevent the misuse of these powerful technologies before they are weaponized at scale.