Cybercriminals have created WormGPT, a ChatGPT-style chatbot built for malicious purposes. Find out how it works.
On the Dark Web, the hidden side of the Internet where illegal activity is commonplace, a ChatGPT-style tool for creating malware, known as WormGPT, is circulating. Its purpose is to generate malicious programs at the user's request.
Those behind the system describe it as an “alternative to ChatGPT to do all kinds of illegal things”. They say the program is intended to facilitate hacking attacks, allowing “anyone to access malicious activities without leaving the comfort of their own home”.
As explained by SlashNext, the security firm that tested WormGPT’s malware-creating capabilities, the tool allows anyone without programming skills to configure their own attack, using customized actions and relying on artificial intelligence. As the name suggests, the mechanics of use are similar to ChatGPT.
How does the malware version of ChatGPT work?
WormGPT operates much like OpenAI’s conversational bot: the user enters specific instructions, and the system acts accordingly. For example, telling it “I want a spyware program that steals social network passwords”, or “develop malware that steals contacts from phonebooks”.
WormGPT writes the malicious applications in Python, one of the most popular programming languages today. The results can then be refined, says the source, though the tool already delivers output that is as elaborate as it is disturbing.
One of the Dark Web’s wares
This malicious ChatGPT-style tool for creating malware is offered on the Dark Web for a subscription of 60 euros per month. Whatever clientele WormGPT gains, its appearance is another sign of the risks accompanying the new boom in artificial intelligence, which, we now know, can also be turned to the creation of malicious software.
With OpenAI’s ChatGPT and Google’s Bard as paradigms, AI-based chatbots offer numerous productive functions. They are capable of writing coherent texts, holding natural conversations, tackling creative tasks and writing software code, among other skills. At the same time, relevant dangers are emerging.
Sam Altman, CEO of OpenAI, has himself acknowledged the risks. “My biggest fear is causing great harm to the world,” he recently told U.S. lawmakers. It is a view that favors regulation, one that entrepreneurs and industry specialists (including Elon Musk) echoed when they demanded a six-month pause in development in order to establish rules for ethical progress in the field.
With this new “battle front” opened by the abuse of large language models for malware development, it is not surprising that WormGPT is offered on the Dark Web.
The emergence of WormGPT, a malicious ChatGPT-style chatbot, highlights the potential risks of abusing artificial intelligence (AI) technology. WormGPT is a system designed for creating malware and conducting illegal activities, offering users without programming skills the ability to develop customized malicious programs.
It operates similarly to ChatGPT: users provide specific instructions for generating malware, such as spyware or programs that steal sensitive information. The malware is written in Python, a popular programming language, and the tool is offered as a subscription service on the Dark Web for 60 euros per month.
The existence of WormGPT underscores the need for responsible development and regulation of AI technologies. While AI-based chatbots like ChatGPT have proven to be useful in various productive applications, their potential for misuse and the creation of harmful software is a growing concern.
Industry leaders, including Sam Altman and Elon Musk, have called for regulations and ethical guidelines to ensure the responsible progress of AI.
The discovery of WormGPT serves as a reminder that proactive measures are necessary to address the risks associated with AI and prevent its exploitation for malicious purposes.
What is WormGPT?
WormGPT is a ChatGPT-style chatbot developed by cybercriminals for malicious purposes. It allows users to generate customized malware without programming skills.
How does WormGPT work?
As with ChatGPT, users provide specific instructions, and WormGPT generates the requested malware. It writes the malicious applications in Python and can produce sophisticated and disturbing results.
Where is WormGPT available?
WormGPT is offered as a subscription service on the Dark Web, the illicit part of the Internet known for illegal activities and anonymity.
What are the risks associated with AI-based chatbots like WormGPT?
The emergence of WormGPT highlights the potential risks of AI misuse and the creation of malicious software. It emphasizes the need for responsible development and regulation of AI technologies.
Are there efforts to regulate AI?
Yes, there have been calls from industry leaders, such as Sam Altman and Elon Musk, for regulations and ethical guidelines to ensure the responsible development and use of AI.
What should be done to address the risks of AI exploitation?
Proactive measures, including regulations, ethical guidelines, and responsible development practices, should be implemented to mitigate the risks associated with AI and prevent its misuse for malicious purposes.