What happens when safety measures baked into ChatGPT or other mainstream large language models (LLMs) are thrown out the window? Enter WormGPT — the chatbot service launched in July and sold exclusively on a dark web forum. Its developer insists WormGPT is an “uncensored AI — not blackhat” that “lets you do all sorts of illegal stuff and easily sell it online in the future.” Sounds like potato potahto.
Recently, WormGPT has backpedaled from its claim of being uncensored by blocking content about murder, child pornography, ransomware, and other criminal activities, yet it still allows business email compromise (BEC) output, the mainstay of phishing attacks. What eases some concern is that even though WormGPT has a vast data set, it is based on an LLM released in 2021, practically a century ago in the world of AI.
Still, because WormGPT is built on an open-source model that learns from raw text data, without the closed-source, supervised safety tuning that constrains a service like ChatGPT, hackers using it can craft believable, human-like attacks with a greater chance of success.
In addition to heightened countermeasures deployed by IT security teams, security awareness training becomes even more essential to the front-line defense against bad actors using uncensored or blackhat AI tools.
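To make "countermeasures" concrete, here is a minimal sketch of one common BEC check: flagging messages whose sender display name impersonates a known executive while the actual address comes from outside the corporate domain. The domain and names below are hypothetical placeholders, not a production filter.

```python
# Minimal sketch of one BEC countermeasure: display-name/domain mismatch.
# The corporate domain and executive list are hypothetical examples.
from email.utils import parseaddr

CORPORATE_DOMAIN = "example.com"              # assumption: your real domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # hypothetical watch list

def looks_like_bec(from_header: str) -> bool:
    """Return True if the sender impersonates an executive from outside the domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    impersonates_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    return impersonates_exec and domain != CORPORATE_DOMAIN

# An external address borrowing an executive's display name gets flagged.
print(looks_like_bec('"Jane Doe" <jane.doe@freemail.example>'))  # True
print(looks_like_bec('"Jane Doe" <jane.doe@example.com>'))       # False
```

A heuristic like this catches only the crudest impersonation attempts, which is exactly why trained, skeptical employees remain the stronger layer of defense.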
Want to learn more? Click here.