WatchGuard Blog

ChatGPT can create polymorphic malware, now what?

Despite the security controls OpenAI has built into ChatGPT to keep it a safe tool for assisting users with a wide variety of tasks, cybercriminals have managed to exploit the technology for malicious purposes.

Recent research has shown that this generative artificial intelligence can create a new breed of polymorphic malware with relative ease. The main risk lies in ChatGPT's versatility, which allows it to produce code that can easily be repurposed as malware.

Although it may look complicated, bypassing the content filters that prevent ChatGPT from creating malicious code is actually quite straightforward. This expands the pool of cybercriminals capable of creating advanced threats, as it simplifies the process and removes the need for advanced technical knowledge. By repeating their queries and insisting that the model comply after its first refusal, and by using the Python API instead of the web version (which yields more consistent results and weaker content filtering), the researchers found they could get ChatGPT to write unique, functional code with malicious potential. They then found they could get ChatGPT to mutate that code, producing polymorphic malware that is highly evasive and difficult for security systems to handle.
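
To make the interaction pattern concrete (and only the pattern: the prompts below are deliberately benign placeholders, not the researchers' prompts), here is a minimal sketch that assumes the openai Python package (v1.x) and an API key in the environment:

```python
# Sketch of the iterative-query loop described above: request working code,
# then repeatedly demand a structurally different rewrite of it. The prompts
# are benign placeholders, not the prompts used in the research.
from openai import OpenAI  # assumes the openai package, v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# First query: request some working code.
code = ask("Write a Python function that reverses a string.")

# Follow-up queries: each pass yields a functionally identical variant
# with a different structure, which is the essence of polymorphism.
for _ in range(3):
    code = ask("Rewrite this function so it behaves identically but is "
               "structured differently:\n" + code)
```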

Operation and features of polymorphic malware 

Polymorphic malware has become one of the most difficult threats to detect and combat due to its persistence and its ability to change its appearance and behavior. Antivirus software, at least software that leans too heavily on signatures or patterns, struggles to detect it precisely because it mutates. Since it hides and evades detection so effectively, it can have a devastating impact on computer systems, steal sensitive information, compromise network security, and cause irreparable damage. But what makes polymorphic malware so complex to deal with?

  • The malware changes appearance each time it is executed: polymorphic viruses are designed to modify their structure and digital appearance on every execution, rewriting their code by re-encrypting it and altering their signatures accordingly, which makes them hard to detect for antivirus programs that rely on known virus signatures (a toy sketch of this follows the list).

  • Source code transformation: this malware evades detection with advanced code obfuscation techniques, such as encryption and packing/unpacking routines, or by incorporating useless or irrelevant code to make analysis more difficult.

  • Evasion techniques: polymorphic malware may use sandbox evasion and other circumvention techniques to avoid detection and analysis. 

  • Customization: it can be highly customized and targeted, making its behavior pattern unique and difficult to detect for programs that rely on spotting suspicious behavior.
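
To illustrate the first point, here is a deliberately harmless toy in Python: the same payload is re-encrypted with a fresh key on every run, so its bytes, and therefore its hash-based signature, change while its behavior does not. This is a simplification for illustration, not how any specific malware family is built:

```python
# Toy illustration of why per-run re-encryption defeats signature matching:
# the same (benign) payload produces different bytes, and thus a different
# hash, every time it is packaged.
import hashlib
import os

payload = b"print('hello')"  # stand-in for any code; benign on purpose


def package(data: bytes) -> bytes:
    key = os.urandom(1)[0]                  # fresh random key each time
    body = bytes(b ^ key for b in data)     # single-byte XOR "encryption"
    return bytes([key]) + body              # prepend the key for unpacking


def unpack(blob: bytes) -> bytes:
    key, body = blob[0], blob[1:]
    return bytes(b ^ key for b in body)     # identical behavior restored


a, b = package(payload), package(payload)
print(hashlib.sha256(a).hexdigest()[:16])   # almost always differs from...
print(hashlib.sha256(b).hexdigest()[:16])   # ...this one, run after run
assert unpack(a) == unpack(b) == payload    # yet the payload never changes
```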

To demonstrate what AI-based malware is capable of, a group of researchers built BlackMamba, a proof-of-concept (PoC) keylogger generated with ChatGPT that uses Python to modify the program randomly at runtime.

The keylogging capability allows attackers to collect sensitive information from any device and, once it is obtained, the malware exfiltrates the collected data over a common and trusted collaboration platform used as a malicious channel, either to sell it on the Dark Web or to leverage it in new attacks.
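
Part of what makes this pattern hard to catch is that the generated code never needs to touch disk. Here is a heavily simplified, harmless sketch of the in-memory execution idea (the "delivered" code is hardcoded and benign; this illustrates the general technique, not BlackMamba's actual code):

```python
# Toy illustration of in-memory execution: code arrives as a string at
# runtime and runs via exec(), so no suspicious file is written to disk
# for a file-based scanner to inspect. The code below is harmless.
variant_source = """
def collect():
    return "benign stand-in for runtime-delivered logic"
"""

namespace = {}
exec(variant_source, namespace)   # compiled and executed purely in memory
print(namespace["collect"]())
```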

Thanks to the open-source programming language Python and its tooling, developers can turn scripts into standalone executable files that run on multiple operating systems.
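
The article does not name a specific packager, but PyInstaller is one widely used option; here is a minimal sketch of its documented programmatic interface (the script name is hypothetical):

```python
# Build a standalone executable from a Python script using PyInstaller's
# documented programmatic entry point (requires: pip install pyinstaller).
import PyInstaller.__main__

PyInstaller.__main__.run([
    "script.py",   # hypothetical script to bundle
    "--onefile",   # pack interpreter, libraries, and code into one file
])
```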

This process demonstrates AI's ability to learn the network environment and recognize security verification patterns, allowing the malware to run without raising system alerts.

How to mitigate AI-based malware 

Polymorphic malware already presents a challenge for cybersecurity specialists, but when AI drives it, its complexity and delivery speed become even greater, while the technical barrier to entry for creating it drops sharply. Traditional security solutions leverage multi-layered data intelligence systems to combat some of today's most sophisticated threats, with automated controls that aim to prevent new or irregular behavior patterns, yet in practice that alone is not enough. Extended detection and response (XDR) offers a complementary method of protecting against these attacks.

XDR solutions, such as WatchGuard's ThreatSync, offer extended visibility, enhanced detection, and rapid response by correlating telemetry from different security solutions, which provides security teams with the full context of a threat. This improves efficiency and reduces the risk of falling victim to polymorphic malware, as analysts gain better visibility into threats and can respond faster in real time.
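
To make the correlation idea concrete, here is a deliberately simplified sketch (not ThreatSync's actual logic or API, and all event names are hypothetical): neither signal alone is conclusive, but an endpoint alert plus matching network telemetry from the same host inside a short window adds up to a high-confidence incident:

```python
# Deliberately simplified cross-telemetry correlation: an endpoint alert
# and a network alert from the same host within a short time window are
# combined into one high-confidence incident. All event data is made up.
from datetime import datetime, timedelta

endpoint_alerts = [  # hypothetical normalized events from an endpoint sensor
    {"host": "ws-042", "type": "unsigned-binary-keyboard-hook",
     "time": datetime(2023, 6, 1, 10, 0)},
]
network_alerts = [  # hypothetical events from a firewall or network sensor
    {"host": "ws-042", "type": "unusual-upload-to-collab-platform",
     "time": datetime(2023, 6, 1, 10, 3)},
]

WINDOW = timedelta(minutes=10)


def correlate(endpoint, network, window=WINDOW):
    """Yield incidents where both telemetry sources implicate one host."""
    for e in endpoint:
        for n in network:
            if e["host"] == n["host"] and abs(e["time"] - n["time"]) <= window:
                yield {"host": e["host"], "evidence": [e["type"], n["type"]]}


for incident in correlate(endpoint_alerts, network_alerts):
    print("High-confidence incident:", incident)
```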

Find out more about XDR and its threat-detection capabilities by visiting the following content: