ChatGPT goes bad: WormGPT is an AI tool ‘with no ethical boundaries’


As ChatGPT continues to rise in popularity, a new and much darker alternative AI tool has been designed specifically for criminal purposes.

WormGPT is a malicious AI tool capable of generating highly realistic and convincing text that can be used in phishing emails, fake social media posts, and other forms of nefarious content.

It’s based on GPT-J, an open-source language model developed in 2021. As well as generating text, WormGPT can also format code, making it far easier for cybercriminals to use the tech to create homebrew malicious software. It puts things like viruses, trojans and large-scale phishing attacks within easy reach.

But the scariest aspect of this technology is that WormGPT can retain chat memory, meaning it can keep track of conversations and use that information to personalize its attacks.

Access to the AI tool is currently being sold on the dark web to scammers and malware creators for just $67 a month, or $617 for a year.

Cybersecurity firm SlashNext gained access to the tool through an online forum associated with cybercrime. During their testing, they described WormGPT as a “sophisticated AI model” but claimed it has “no ethical boundaries or limitations.”

In a blog post, they explained: “This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

Why is WormGPT so scary?


As more people become savvy to phishing emails, WormGPT can be used to mount more sophisticated attacks. In particular, it enables Business Email Compromise (BEC), a form of phishing attack in which criminals attempt to trick senior account holders into transferring funds or revealing sensitive information that can be used for further attacks.

Other AI tools, including ChatGPT and Bard, have security measures in place to protect user data and stop criminals from misusing the technology.

In their research, SlashNext used WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

The results showed WormGPT produced emails that were not only cunning but also remarkably persuasive, showcasing the potential for large-scale attacks.

WormGPT can also be used to create phishing attacks by producing convincing text that encourages users to divulge sensitive information such as login details and financial data. This could lead to a rise in identity theft, financial loss, and compromised personal security.

As technology, particularly AI, evolves, so does the potential for cybercriminals to use it for their own gain and cause significant harm.

Posting screenshots to the hacking forum that SlashNext used to obtain the tool, the developer of WormGPT described it as the "biggest enemy of the well-known ChatGPT,” claiming that it "lets you do all sorts of illegal stuff."

A challenge for law enforcement


A recent report from Europol warned that large language models (LLMs) such as ChatGPT that can process, manipulate, and generate text could be used by criminals to commit fraud and spread disinformation.

They wrote: “As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.

“ChatGPT’s ability to draft highly authentic texts based on a user prompt makes it a handy tool for phishing purposes,

“Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”

They warned that LLMs mean that hackers can carry out attacks faster, more authentically, and on an increased scale.
