WatchGuard Blog

ChatGPT and the dark web: a dangerous alliance

Artificial intelligence (AI) is now present in almost every aspect of our lives. However, its influence is particularly relevant in the field of cybersecurity, where it impacts both defense and attack. While it has become a great tool to protect an organization's digital assets, it has also been weaponized by cybercriminals to spread and execute increasingly sophisticated, difficult-to-detect cyberattacks.

A recent study reveals that 84% of CEOs are concerned about widespread or catastrophic cybersecurity attacks that could be triggered by the adoption of generative AI. Moreover, an article published last January reveals an uptick in dark web conversations about the illegal use of ChatGPT and other large language models (LLMs). Discussions cover a variety of cybersecurity threats, but common topics include malware development and other illegal uses of language models, such as processing stolen user data and parsing files from infected devices.

The fact that AI tools enhance the skills of less advanced cybercriminals, coupled with the boom in sharing tips and techniques on the dark web, means threats such as phishing and ransomware are likely to become an even greater danger to organizations. 

Which AI cyberattack techniques are most commonly deployed?

Despite the efforts of commercial generative AI tools such as ChatGPT to implement guardrails against the malicious use of this technology, hackers have found ways to trick them into helping anyway. Meanwhile, emerging alternatives such as WormGPT, which lack these safeguards, are being weaponized by malicious actors. The most common AI-supported threat techniques include:

  • AI-generated phishing campaigns: Generative AI has revolutionized the way hackers craft their phishing campaigns, enabling them to create more credible texts that do not raise alarms and are therefore harder to detect. It also saves them time, making campaigns more effective.
  • AI-assisted target research: Analysis of social media and other online data using machine learning algorithms allows attackers to gather valuable information about their targets, such as their interests, habits, and vulnerabilities.
  • Intelligent vulnerability detection: AI-enabled reconnaissance tools can automatically scan corporate networks for vulnerabilities and select the most effective exploit for each one.
  • Intelligent data exfiltration: During an attack, AI-driven tools do not copy all available data but select only the most valuable information to extract, making the theft harder to detect.
  • AI-powered social engineering: AI can be used to generate deepfake audio or video that mimics trusted people in vishing attacks, increasing the credibility of the attack and persuading employees to disclose sensitive information.

How to protect your company with advanced endpoint security

The use of generative AI to commit cyberattacks raises the level of complexity, which means more robust defense mechanisms are needed to deal with new threats. Endpoint security plays a key role in this defense, and organizations should implement advanced security solutions that incorporate AI capabilities to help prevent, detect, and respond to these types of threats. An advanced endpoint security solution that incorporates AI technology can:

  • Detect emerging threats: 

    An advanced endpoint security solution applies techniques such as behavioral analysis and machine learning to identify and block new and evolving malware. It can also help with patching to eliminate potential vulnerabilities and security holes within your network.

  • Minimize the risk of a data breach: 

    A successful phishing campaign or the use of malware can compromise your company's sensitive data. Advanced endpoint protection helps you prevent sensitive data leaks and protect your information, thus avoiding serious repercussions such as financial loss, erosion of customer confidence, and reputational damage.

  • Contribute to regulatory compliance: 

    Several industries are required by law to deploy advanced anti-malware protection. Failure to comply can result in fines and other legal repercussions.
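To make the behavioral-analysis idea above concrete, here is a minimal, illustrative sketch of how an endpoint agent might score a process event against a baseline of previously observed behavior. This is not how any particular product works: the event names, the suspicious-pair list, and the scoring formula are all hypothetical simplifications; real endpoint security solutions use far richer telemetry and trained models.

```python
from collections import Counter

# Hypothetical known-bad behavior patterns (e.g., Office apps spawning shells).
SUSPICIOUS_PAIRS = {
    ("winword.exe", "spawn:powershell.exe"),
    ("excel.exe", "spawn:cmd.exe"),
}

def build_baseline(events):
    """Count how often each (process, action) pair occurred in normal activity."""
    return Counter(events)

def anomaly_score(event, baseline, total):
    """Return a 0..1 score: higher means rarer and/or matching a known-bad pattern."""
    rarity = 1.0 - (baseline.get(event, 0) / total)
    bonus = 0.5 if event in SUSPICIOUS_PAIRS else 0.0
    return min(rarity + bonus, 1.0)

# Toy baseline: mostly routine browser and document activity.
normal = [("chrome.exe", "net:update-server")] * 95 + \
         [("winword.exe", "open:report.docx")] * 5
baseline = build_baseline(normal)

# A never-before-seen, known-bad combination scores as highly anomalous.
score = anomaly_score(("winword.exe", "spawn:powershell.exe"),
                      baseline, len(normal))
```

In this toy example, the unseen Word-spawns-PowerShell event scores the maximum 1.0, while routine browser traffic scores near zero; a real product would feed many such signals into a model rather than a single hand-tuned formula.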

If you would like to learn more about how to protect your business from AI-powered malware, check out the following blog post: 

ChatGPT can create polymorphic malware, now what? 
