WormGPT Might Become Hackers' New Best Imaginary Friend

A new, custom-trained version of an LLM (Large Language Model) is making the rounds, but for the worst possible reasons. WormGPT, as it's been christened by its creator, is a new conversational tool -- based on the GPT-J language model released in 2021 -- that's been trained and developed with the sole purpose of writing and deploying black-hat code and tools. The promise is that it'll allow its users to develop top-tier malware at a fraction of the cost (and knowledge) previously necessary. The tool was tested by cybersecurity outfit SlashNext, which warned in a blog post that "malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes". The service can be had for an "appropriate" monthly subscription: 60 euros per month, or 550 euros a year. Everybody, even hackers, loves Software as a Service, it seems.

According to the WormGPT developer, "This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home."

Democratization is all well and good, but perhaps not at its best when what it means is the proliferation and empowerment of ill-intentioned actors.

According to screenshots posted by the creator, WormGPT essentially works like an unguarded version of ChatGPT -- one that won't actively block conversations at the first whiff of risk. WormGPT can apparently produce malware written in Python, and will provide tips, strategies, and solutions to problems relating to the malware's deployment.

SlashNext's analysis of the tool was unsettling. After instructing the agent to generate an email intended to pressure a victim into paying a fraudulent invoice, the researchers reported: "WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [Business Email Compromise] attacks."

It was only a matter of time before someone took all the good things about having open-source Artificial Intelligence (AI) models and turned them on their head. It's one thing to develop humorous, boastful takes on a chat-like AI assistant (looking at you, BratGPT). It's another thing yet to develop a conversational model trained on the specific language and obfuscations of the Dark Web. But to take ChatGPT's world-renowned programming skills and apply them solely toward the development of AI-written malware is something else entirely.

Of course, it'd also be theoretically possible for WormGPT to be an actual honeypot: an AI agent trained to create functional malware that always gets caught and makes sure to identify its sender. We're not saying that's what's going on with WormGPT, but it is possible. So anyone using it had better check their code, one line at a time.

In the case of these privately developed AI agents, it's important to note that few (if any) will show general capabilities on par with what we've come to expect from OpenAI's ChatGPT. While techniques and tools have improved immensely, training an AI agent without proper funding (and data) remains an expensive and time-consuming endeavor. But the fact is that as companies sprint toward the AI gold rush, costs will keep plummeting, datasets and training methods will improve, and more and more competent private AI agents such as WormGPT and BratGPT will keep surfacing.

WormGPT may be the first such system to hit mainstream recognition, but it certainly won't be the last.