Generative artificial intelligence (AI) is gaining popularity across the tech world. The technology has a darker side, however: cybercriminals are exploiting it to further their malicious activities.
A New Tool for Cybercrime: WormGPT
Earlier this year, SlashNext reported on WormGPT, a generative AI tool being sold on underground forums. Promoted as a powerful aid for crafting sophisticated phishing and business email compromise (BEC) attacks, the tool has drawn the concern of security researchers (The Hacker News, 2023).
According to security researcher Daniel Kelley, the tool was developed explicitly for malicious activities and appears to be a blackhat alternative to GPT models. “This technology allows cybercriminals to create compelling fake emails customized to the recipient automatically, increasing the chances of success.”
WormGPT is reportedly based on GPT-J, an open-source language model from EleutherAI, and its author describes it as the “biggest enemy” of ChatGPT.
Large Language Models (LLMs): A Double-Edged Sword
LLMs such as OpenAI’s ChatGPT and Google’s Bard have opened new frontiers in natural language processing. Despite their benefits, these tools also pose cybersecurity risks: criminals can exploit them to craft convincing phishing emails and generate malicious code (Hadi et al., 2023).
According to a study by Check Point, Bard’s anti-abuse restrictions are considerably less stringent than ChatGPT’s, making it significantly easier to generate harmful content with Bard.
Earlier in the year, an Israeli cybersecurity firm found that cybercriminals were working around ChatGPT’s restrictions by abusing its API, trafficking in stolen premium accounts, and selling brute-force tools that run through long lists of email addresses and passwords to gain unauthorized access to ChatGPT accounts.
The Threat of WormGPT: An AI with No Ethics
WormGPT ships without ethical safeguards, allowing even inexperienced cybercriminals to launch large-scale attacks quickly and without deep technical skills.
Threat actors aggravate the situation further by promoting “jailbreaks” for ChatGPT: specially crafted prompts and inputs designed to trick the tool into disclosing sensitive information, generating inappropriate content, or producing harmful code.
Kelley added: “Generative AI can create emails with impeccable grammar, avoiding flagging them as suspicious and making them sound legitimate. The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it accessible to a broader spectrum of cybercriminals.”
The Case of PoisonGPT: An LLM Supply Chain Poisoning
Leveraging open-source AI, researchers at Mithril Security manipulated the GPT-J-6B model to spread false information and uploaded it to a public repository. Because the poisoned model could then be pulled into downstream applications, the result is “supply chain poisoning” of the LLM ecosystem.
Known as PoisonGPT, the technique relies on uploading the manipulated model under a typosquatted version of a legitimate organization’s name.
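One practical mitigation is to treat model downloads like any other software dependency: verify the publisher’s exact name and pin the specific revision you have audited. The sketch below assumes the Hugging Face transformers library; the commit hash is a hypothetical placeholder standing in for one you have actually vetted.

    # A minimal defensive sketch, assuming the Hugging Face "transformers"
    # library: pin the exact publisher name and an audited commit hash so a
    # typosquatted repository or a silently swapped model cannot slip in.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "EleutherAI/gpt-j-6b"  # verify the publisher's spelling character by character
    AUDITED_REVISION = "<commit-sha-you-audited>"  # hypothetical placeholder, not a real hash

    # revision= pins the download to one immutable commit on the model hub,
    # so later uploads to the same repository cannot change what you load.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=AUDITED_REVISION)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=AUDITED_REVISION)

Pinning a revision does not prove the weights are benign, but it does prevent a repository owner, or a typosquatter, from silently substituting new weights after your review.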
While generative AI has demonstrated tremendous potential across many fields, its abuse by malicious actors poses a significant threat. These cases underscore the importance of implementing stringent cybersecurity measures and carefully studying emerging technologies (Steinberg, 2023).
References
Hadi, M. U., Al-Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., & Mirjalili, S. (2023, July 10). A survey on large language models: Applications, challenges, limitations, and practical usage. TechRxiv. https://www.techrxiv.org/articles/preprint/A_Survey_on_Large_Language_Models_Applications_Challenges_Limitations_and_Practical_Usage/23589741
Steinberg, B. (2023, July 19). ChatGPT’s evil twin WormGPT is secretly entering emails, raiding banks. New York Post. https://nypost.com/2023/07/19/chatgpts-evil-twin-wormgpt-is-secretly-entering-emails-raiding-banks/
The Hacker News. (2023, July 15). WormGPT: New AI tool allows cybercriminals to launch sophisticated cyber attacks. https://thehackernews.com/2023/07/wormgpt-new-ai-tool-allows.html