A black hat hacker has unleashed a malicious alternative to OpenAI's ChatGPT called WormGPT, which has already been harnessed to craft effective phishing emails aimed at thousands of potential victims.

WormGPT, based on GPT-J, an open-source large language model released by EleutherAI in 2021, is designed specifically for malicious activities, according to a report by cybersecurity firm SlashNext. Its features include unlimited character support, chat memory retention, and code formatting, and it has been trained on malware-related datasets.

Cybercriminals are now using WormGPT to launch a type of phishing attack known as a Business Email Compromise (BEC) attack.

"The difference [from WormGPT] is ChatGPT has guardrails in place to protect against unlawful or nefarious use cases," David Schwed, chief operating officer at blockchain security firm Halborn, told Decrypt via Telegram. "[WormGPT] doesn't have those guardrails, so you can ask it to develop malware for you."

Phishing attacks are one of the oldest yet most common forms of cyberattack, and are commonly executed via email, text messages, or social media posts under a false name. In a business email compromise attack, an attacker poses as a company executive or employee and tricks the target into sending money or sensitive information.

Thanks to rapid advances in generative AI, chatbots like ChatGPT or WormGPT can write convincing human-like emails, making fraudulent messages harder to spot.

SlashNext says technologies like WormGPT lower the bar for waging effective BEC attacks, empowering less skilled attackers and thus creating a larger pool of would-be cybercriminals.

To protect against business email compromise attacks, SlashNext recommends that organizations use enhanced email verification, including automatic alerts for emails impersonating internal executives or employees, and flagging of messages containing keywords such as "urgent" or "wire transfer" that are typically associated with BEC attacks.
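As a rough illustration of the flagging approach SlashNext describes, a mail pipeline could scan inbound messages for BEC-associated keywords and for display names that mimic internal executives. The keyword list, the placeholder executive names, and the function itself are illustrative assumptions for this sketch, not SlashNext's actual detection logic.

```python
# Minimal sketch of keyword- and impersonation-based BEC flagging.
# BEC_KEYWORDS and EXECUTIVE_NAMES are illustrative placeholders, not real rules.
BEC_KEYWORDS = {"urgent", "wire transfer", "gift card", "invoice"}
EXECUTIVE_NAMES = {"jane doe (ceo)", "john smith (cfo)"}  # hypothetical internal figures


def flag_bec_email(sender_display_name: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons an inbound email looks like a possible BEC attempt."""
    reasons = []
    text = f"{subject} {body}".lower()

    # Flag any BEC-associated keyword found in the subject or body.
    for keyword in sorted(BEC_KEYWORDS):
        if keyword in text:
            reasons.append(f"keyword: {keyword}")

    # Flag display names that match known internal executives, since
    # external mail reusing an executive's name is a classic BEC sign.
    if sender_display_name.lower() in EXECUTIVE_NAMES:
        reasons.append("display name matches an internal executive")

    return reasons
```

In practice a filter like this would run only on external mail and feed an alerting system rather than blocking outright, since keywords like "urgent" also appear in legitimate messages.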

With the ever-increasing threat from cybercriminals, corporations are constantly looking for ways to protect themselves and their customers.

In March, Microsoft—one of the largest investors in ChatGPT creator OpenAI—launched a security-focused generative AI tool called Security Copilot. Security Copilot harnesses AI to enhance cybersecurity defenses and threat detection.

“In a world where there are 1,287 password attacks per second, fragmented tools and infrastructure have not been enough to stop attackers,” Microsoft said in its announcement. “And although attacks have increased 67% over the past five years, the security industry has not been able to hire enough cyberrisk professionals to keep pace.”

SlashNext has not yet responded to Decrypt's request for comment.
