AI-generated cybercrime is rapidly expanding, accelerated by the launch of several new tools on the darkweb beyond the discovery of WormGPT last month, according to a new report released on Tuesday by cybersecurity firm SlashNext.

WormGPT and FraudGPT, released a week later, are just the tip of the iceberg among the artificial intelligence tools that cybercriminals aim to employ against victims, SlashNext concludes. FraudGPT alone was built to create phishing scam web pages, write malicious code, build hacking tools, and draft scam letters.

SlashNext researchers said they engaged a pseudonymous individual named CanadianKingpin12 via Telegram.

“During our investigation, we took on the role of a potential buyer to dig deeper into CanadianKingpin12 and their product, FraudGPT,” SlashNext said. “Our main objective was to assess whether FraudGPT outperformed WormGPT in terms of technological capabilities and effectiveness.”


The team got more than it bargained for: while showing off FraudGPT, the seller said new AI chatbots called DarkBart and DarkBERT are coming. Those chatbots, CanadianKingpin12 claimed, will have internet access and will integrate with Google Lens, Google's image recognition technology, allowing them to send both text and images.

SlashNext notes that DarkBERT was originally designed by data intelligence company S2W as a legitimate tool to fight cybercrime, but that criminals have clearly repurposed the technology to commit cybercrime instead.

CanadianKingpin12 told researchers that DarkBERT can even assist in advanced social engineering attacks, exploit vulnerabilities in computer systems, and distribute malware, including ransomware.

"ChatGPT has guardrails in place to protect against unlawful or nefarious use cases," David Schwed, chief operating officer at blockchain security firm Halborn, previously told Decrypt on Telegram. "[WormGPT and FraudGPT] don’t have those guardrails, so you can ask it to develop malware for you."


SlashNext said the seller was forced to switch communications to encrypted messenger apps after being banned from a forum for policy violations, specifically for trying to sell access to FraudGPT through online forums on the public "clear net."

The clear net, or surface web, refers to the general internet accessible through search engines. In contrast, the darknet or darkweb isn't indexed by search engines, and darknet websites can't typically be found through a Google search. While the darkweb has been linked to cybercriminals and illegal online marketplaces like the Silk Road, many users, such as journalists and political dissidents, rely on it to obscure their identities and protect their privacy.

To defend against the rapid development of AI-generated cybercrime tools, SlashNext advises companies to be proactive in their cybersecurity training and implement enhanced email verification measures.

As cybercriminals turn to AI to create more advanced malicious tools, a separate report by web security company Immunefi found that cybersecurity experts are having little luck using AI to fight cybercrime. In that survey, 64% of experts said OpenAI's chatbot provided "limited accuracy," and 61% said it lacked the specialized knowledge needed to identify exploits.

“While it’s difficult to accurately gauge the true impact of these capabilities, it’s reasonable to expect that they will lower the barriers for aspiring cybercriminals,” SlashNext said. “Moreover, the rapid progression from WormGPT to FraudGPT and now DarkBERT in under a month, underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape.”
