Blog WatchGuard

GhostGPT, the new malicious chatbot and its impact on cybersecurity

At this juncture, it is no surprise that cybercriminals are leveraging the potential of generative artificial intelligence to strengthen their attacks. However, the emergence of new models specifically designed to generate threats quickly has made this task even easier for bad actors.

At the end of 2024, researchers discovered a new AI chatbot created for cybercriminal purposes. This model, called GhostGPT, arms cybercriminals with unprecedented capabilities, allowing them to develop sophisticated phishing emails and malware with ease and at a speed that would have been unthinkable just a few years ago.

This is possible because it operates without restrictions that limit other models like ChatGPT, which are subject to ethical guidelines and security filters designed to block malicious requests. It is likely to be a wrapper—an interface or additional layer placed over a pre-existing AI model—connected to a jailbroken version of ChatGPT or an open-source LLM, which removes ethical safeguards from the equation.

4 main risks this chatbot poses to companies

GhostGPT does not log user activity, prioritizing anonymity. This is particularly attractive to malicious actors who want to stay under the radar while using the chatbot. Its accessibility and lack of controls make it an extremely dangerous tool, capable of automating and accelerating illicit tasks that previously required more skill or time. Cybercriminals can use it to generate the following:

  1. Personalized and mass phishing: GhostGPT can craft persuasive and personalized emails, mimicking the most suitable tone and style based on the victim’s context. It also allows an attacker to generate hundreds of customized variations in just a few minutes, boosting the reach and speed of attacks. To counter this, organizations should provide phishing awareness training that helps employees recognize and respond to these attempts, reducing the likelihood of a successful attack.
  2. Credential theft and unauthorized access: GhostGPT also makes credential theft much easier to perpetrate. With just a simple prompt, it can generate fake login pages that are nearly indistinguishable from real ones. Bad actors can then use these fakes in phishing campaigns.
  3. Polymorphic malware and malicious code: This tool's ability to write malicious code on demand puts the creation of basic malware—and even functional ransomware—within the reach of unskilled cybercriminals. AI-generated polymorphic malware, which constantly rewrites its own code to evade antivirus detection, is a particularly concerning risk.
  4. Attack strategy optimization and guidance: This chatbot can also advise hackers, providing detailed instructions on how to carry out more effective attacks. For instance, it offers guidance on how to set up command-and-control servers for malware campaigns, bypass security solutions, or exploit specific vulnerabilities.

How to protect yourself from these attacks

It may seem like all is lost, given this evolving scenario. However, there are measures companies can deploy to mitigate the risks and prevent a chatbot like GhostGPT from becoming a cybersecurity threat. This requires combining good security practices with advanced technological solutions.

For starters, keeping systems updated and applying Zero Trust principles to reduce the attack surface is essential. From a technological standpoint, implementing multi-factor authentication (MFA) strengthens access protection, while using DNS filters helps prevent phishing attacks.
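To make the DNS-filtering idea concrete, here is a minimal, purely illustrative sketch of how a blocklist-style lookup works: a hostname is rejected if it, or any parent domain, appears on a deny list. The domains and the `is_blocked` helper are hypothetical; real DNS filtering services rely on large, continuously updated threat feeds rather than a hard-coded set.

```python
# Hypothetical blocklist for illustration only; real services use live feeds.
BLOCKLIST = {"phish-example.test", "fake-login.test"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the full name and every parent domain, e.g. a.b.c -> a.b.c, b.c, c
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

print(is_blocked("login.phish-example.test"))  # True: parent domain is listed
print(is_blocked("example.com"))               # False: not on the list
```

Checking parent domains matters because phishing campaigns often rotate subdomains under a single malicious registration.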

Next, adopting AI-based detection tools, such as Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) solutions, improves the ability to identify anomalies caused by automated attacks. Similarly, threat intelligence solutions help anticipate new tactics and update defenses in real time.
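As a toy sketch of the kind of anomaly flagging such tools perform, the snippet below scores an observed event rate against a baseline using a z-score. The baseline numbers and the three-sigma threshold are illustrative assumptions, not taken from any specific EDR/XDR product.

```python
from statistics import mean, stdev

# Hypothetical baseline: logins per hour observed during normal operation.
baseline_logins_per_hour = [4, 5, 3, 6, 4, 5, 4]

def is_anomalous(observed: float, baseline: list, z_threshold: float = 3.0) -> bool:
    """Flag the observation if it deviates from the baseline mean by more
    than z_threshold standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) > z_threshold * sigma

print(is_anomalous(40, baseline_logins_per_hour))  # True: burst far above baseline
print(is_anomalous(5, baseline_logins_per_hour))   # False: within normal range
```

Production systems combine many such signals across endpoints and correlate them, but the core idea is the same: automated attacks tend to produce statistically abnormal activity.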

In short, by implementing a layered security approach and integrating defensive AI, companies can more effectively withstand attacks generated by GhostGPT and other malicious AI systems.

If you want to learn more about the use of AI in cybersecurity, don’t miss the following posts on our blog: