5 cyber threats that criminals can generate with the help of ChatGPT
ChatGPT, the public generative AI chatbot that launched in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyberthreats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using this AI chatbot to craft malware or even augment phishing attacks.
And it has not taken long for their suspicions to be confirmed: cybercriminals have already started using this tool, built on the GPT-3.5 family of AI language models, to recreate malware strains and perpetrate different types of attacks. They simply need an OpenAI account, which they can create free of charge on its website, and then submit a query.
What can cybercriminals do with ChatGPT?
Attackers can leverage ChatGPT's generative artificial intelligence to craft malicious activity, including:
- Phishing emails:
Threat actors can use the Large Language Model (LLM) behind ChatGPT to move away from boilerplate formats and automate the creation of unique phishing or spoofing emails, written with perfect grammar and natural speech patterns tailored to each target. Email attacks crafted with the help of this technology therefore look much more convincing, making it harder for recipients to spot them and avoid clicking on malicious links that may deliver malware.
- Identity theft:
In addition to phishing, bad actors can use ChatGPT to impersonate a trusted institution, thanks to the AI's ability to replicate the corporate tone and discourse of a bank or organization, and then push these messages out via social media, SMS or email to obtain people's private and financial information. Malicious actors can also exploit this capability to write social media posts posing as celebrities.
- Other social engineering attacks:
Attackers can also use the model to create highly realistic fake profiles on social media and then trick people into clicking on malicious links or persuade them into sharing personal information.
- Creation of malicious bots:
ChatGPT can be used to create chatbots, as it offers an API that can power other chat applications. Its user-friendly conversational interface, designed for beneficial uses, can instead be turned to deceiving people and running persuasive scams, as well as to spreading spam or launching phishing attacks.
- Malware development:
ChatGPT can help perform a task that usually requires high-level programming skills: generating code in various programming languages. The model thus enables threat actors with limited or no coding skills to develop malware; ChatGPT writes it simply from a description of the functionality the malware should have.
In turn, sophisticated cybercriminals could also use this technology to make their threats more effective or to close existing loopholes. In one case shared on a criminal forum, ChatGPT was used to create Python-based malware that can search for, copy and exfiltrate 12 common file types, such as Office documents, PDFs and images, from an infected system: if it finds a file of interest, the malware copies it to a temporary directory, compresses it and sends it over the web. The same malware author also showed how they had used ChatGPT to write Java code that downloads the PuTTY SSH and Telnet client and covertly runs it on a system via PowerShell.
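To make the API point above concrete (and without reproducing anything malicious), the sketch below shows how easily a third-party chatbot can be wrapped around ChatGPT's chat completions endpoint. The endpoint URL, model name and payload shape follow OpenAI's publicly documented API; the helper name and placeholder key are illustrative assumptions, and no request is actually sent.

```python
import json
import urllib.request

# Publicly documented OpenAI chat completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"


def build_chat_request(user_message: str, api_key: str,
                       model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Assemble (but do not send) the HTTP request a wrapper bot would issue."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder key below
            "Content-Type": "application/json",
        },
        method="POST",
    )


# A real bot would pass this to urllib.request.urlopen() and relay the
# model's reply to its own users; here we only inspect the payload.
req = build_chat_request("Hello", "sk-placeholder")
print(json.loads(req.data)["model"])  # gpt-3.5-turbo
```

The point is the low barrier to entry: a few dozen lines and an account suffice to put a convincing conversational front end on any service, legitimate or not.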
Advanced threats require advanced solutions
Advanced cyberthreats must be met with solutions that are up to the task. WatchGuard EPDR combines endpoint protection (EPP) and endpoint detection and response (EDR) capabilities in a single solution. Thanks to its new and emerging machine learning and deep learning AI models, WatchGuard EPDR protects against advanced threats, advanced persistent threats (APTs), zero-day malware, ransomware, phishing, rootkits, memory vulnerabilities and malware-free attacks. It also provides complete endpoint and server visibility, monitoring and detecting malicious activity that can evade most traditional antivirus solutions.
The solution continuously monitors all applications and detects malicious behavior even when it originates from legitimate applications. It can also orchestrate an automated response and provide the forensic information needed to thoroughly investigate each attack attempt through advanced indicators of attack (IoAs).
In short, the innovation behind a tool like ChatGPT can be positive for the world and change current paradigms, but it can also do serious harm if it falls into the wrong hands. Having the right cybersecurity solution in place can keep the downside of promising tools like this one, when misused by bad actors, from reaching your organization.
If you want to learn more about ChatGPT and the potential cybersecurity risks of misuse, you can do so by listening to these two podcasts from our WatchGuard Threat Lab team: