Beware of AI Phishing Emails

How Novice Black Hat Hackers Are Using Artificial Intelligence to Steal Your Personal Information

A new underground tool known as WormGPT has emerged, marking a disturbing development in generative artificial intelligence (AI). Unlike ChatGPT, which ships with safeguards, WormGPT has no abuse prevention mechanisms, making it a potent weapon for cybercriminals. WormGPT was brought to light by Daniel Kelley, a reformed black hat hacker, who collaborated with the cybersecurity firm SlashNext to document its capabilities and potential risks.

Unleashing the Power of WormGPT

WormGPT is built on GPT-J, an open-source language model released in 2021, and was accessed through a notorious online forum associated with cybercrime. This black hat alternative to legitimate generative AI models is purpose-built for malicious activity. Its advertised features include unlimited character support, chat memory retention, and code formatting. The exact datasets used to train WormGPT remain confidential, known only to its author and publisher.

Assessing the Dangers

Kelley and SlashNext ran a series of tests to understand the implications of WormGPT becoming widely available and widely known. One experiment focused on Business Email Compromise (BEC) attacks: WormGPT was instructed to generate an email pressuring an unwitting account manager into paying a fraudulent invoice. The results were unsettling. WormGPT produced an email that was remarkably persuasive and strategically cunning, demonstrating its potential for sophisticated phishing and BEC attacks.

While visually similar to ChatGPT, WormGPT deliberately operates without ethical boundaries or limitations. It will answer any question, generate any document, and write malware on request. The experiment underscores the significant threat that generative AI tools like WormGPT pose, even in the hands of novice cybercriminals. Because WormGPT writes emails with impeccable grammar, its messages appear legitimate and are less likely to be flagged as suspicious. It also lowers the barrier to entry for criminals with limited technical skills or a weak command of the target language, broadening access to this kind of attack across the cybercrime landscape.

Anybody Can Be a L33T Phisher

The emergence of WormGPT signifies a new chapter in the misuse of generative AI. By producing convincing and sophisticated attacks, it gives cybercriminals a dangerous tool capable of bypassing traditional security measures. As this technology becomes more widespread, the need for robust abuse prevention and ethical frameworks in AI development becomes even more critical. The cybersecurity community must remain vigilant and proactive in countering the threats posed by malicious AI like WormGPT to safeguard individuals, organizations, and digital ecosystems.
