
I'm the Bad AI: The Misuse of Generative AI in Cyber Attacks

Introduction

Generative AI has unlocked an entirely new level of potential for technology, but that same potential has caught the attention of nefarious actors in the cyber realm. The misuse of AI tools is enabling cybercriminals to launch complex, targeted attacks with surprising accuracy and persistence. Below, we summarize several reports highlighting these trends and suggest ways to use generative AI safely.

Business Email Compromise Attacks Leveraging Generative AI

A new generative AI tool, dubbed 'WormGPT', has been used by cybercriminals to carry out advanced Business Email Compromise (BEC) attacks. Using natural language processing (NLP) and machine learning, the tool generates highly convincing phishing emails that impersonate trusted figures within organizations. The quality of the deception, combined with the ability to send emails at scale, has made it an effective weapon against business operations.

Organizations can protect themselves from AI-driven BEC attacks through continuous training programs that educate employees on BEC threats, the role AI plays in them, and common attacker tactics. They should also implement stringent email verification processes, including alerts for emails that impersonate internal figures and keyword flagging for terms often associated with BEC attacks, so that potentially malicious emails are examined before any consequential action is taken.
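As a minimal illustration of the keyword-flagging idea, the sketch below scores an email against a list of phrases commonly associated with BEC lures. The keyword list and threshold are illustrative assumptions, not a vetted ruleset; a real filter would combine this with sender verification and other signals.

```python
# Illustrative keywords often associated with BEC lures (assumed list, not a vetted ruleset).
BEC_KEYWORDS = [
    "wire transfer", "urgent payment", "gift cards", "change of bank details",
    "confidential request", "invoice attached", "act immediately",
]

def flag_bec_keywords(subject: str, body: str, threshold: int = 2) -> bool:
    """Return True if the email matches enough BEC-associated keywords to warrant review."""
    text = f"{subject}\n{body}".lower()
    hits = sum(1 for keyword in BEC_KEYWORDS if keyword in text)
    return hits >= threshold

if __name__ == "__main__":
    subject = "Urgent payment needed today"
    body = "Please process this wire transfer before noon. Keep it quiet."
    print(flag_bec_keywords(subject, body))  # True -> route to manual review
```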

Theft of AI Credentials on the Dark Web

Another troubling trend has emerged: stolen AI credentials for sale on the dark web. Over 100,000 OpenAI account credentials were harvested by information-stealing malware over the past year and are now being traded by cybercriminals. This unauthorized access lets attackers use advanced AI chatbot technology for a range of malicious activities, from phishing campaigns to the spread of misinformation.

Organizations can guard against credential theft by encouraging users to practice good cybersecurity hygiene: using strong, unique passwords; enabling two-factor authentication (2FA); regularly updating software; staying alert to phishing; and connecting only over secure networks.
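One concrete way to act on the password advice is to reject credentials that already appear in breach corpora. The sketch below uses the public Have I Been Pwned 'Pwned Passwords' range API, which accepts only the first five characters of a password's SHA-1 hash (k-anonymity), so the password itself never leaves the machine. Error handling and rate limiting are omitted for brevity.

```python
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Return how many known breaches contain this password (0 if none),
    using the Have I Been Pwned range API. Only the first five hex characters
    of the SHA-1 hash are sent over the network."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    request = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-check"},
    )
    with urllib.request.urlopen(request) as response:
        for line in response.read().decode().splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(password_breach_count("password123"))  # nonzero: widely breached
```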

Weaponizing Generative AI for Malware

Hackers are now weaponizing generative AI to write malware code. The latest and possibly scariest example, 'LLMorpher', is a tool that uses OpenAI's GPT models to rewrite its own code constantly. Where traditional malware copies its malicious functions verbatim into infected files, LLMorpher uses an API key to call GPT and regenerate those functions on every infection, helping it evade signature-based antivirus detection. Such tactics underscore the evolving sophistication of the cybercrime landscape. Thankfully, LLMorpher is more a piece of security research than an active threat, and it can be contained for now by blocking its API key.
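Since containment hinges on the attacker's hard-coded API key, a simple defensive habit is to scan file systems and repositories for strings that look like such keys so they can be rotated or revoked. The sketch below is a rough illustration; the "sk-" pattern is an approximation of OpenAI-style key formats, which vary and change over time.

```python
import re
from pathlib import Path

# Approximate pattern for OpenAI-style secret keys ("sk-" prefix);
# real key formats vary and change over time.
API_KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def scan_for_keys(root: str) -> list[tuple[str, str]]:
    """Walk a directory tree and report files containing strings that look
    like hard-coded API keys, so they can be rotated or revoked."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in API_KEY_PATTERN.findall(text):
            findings.append((str(path), match[:8] + "..."))  # redact in output
    return findings

if __name__ == "__main__":
    for file, key in scan_for_keys("."):
        print(f"{file}: possible API key {key}")
```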

More broadly, organizations can guard against malware by deploying advanced threat detection systems, regularly updating and patching software, and training employees in cybersecurity practices. These core strategies help identify threats, close vulnerabilities, and raise overall security awareness.

Safeguarding Against Generative AI Exploitation

Improved AI Ethics and Governance

In the face of these challenges, it is essential to bolster AI ethics and governance frameworks. AI developers must incorporate safeguards within the design and operational phases to detect and mitigate potential misuse. Collaborative efforts between academia, industry, and regulators can ensure comprehensive and evolving standards.

AI Transparency and Explainability

AI transparency and explainability should be emphasized so that users understand how AI tools function, what data they handle, and what risks they pose. This knowledge helps users stay vigilant and recognize AI-assisted scams.

Enhancing Cybersecurity Measures

Standard cybersecurity measures should be bolstered. Regular audits of AI systems, use of two-factor authentication, and training users to identify suspicious activities are some measures that can significantly reduce the risk of cyberattacks.
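As a small example of the 2FA recommendation, the sketch below shows time-based one-time passwords (TOTP) using the third-party pyotp library. The account name and issuer are placeholders, and a real deployment would store the secret server-side and verify the code the user submits at login.

```python
# Requires the third-party pyotp package (pip install pyotp).
import pyotp

# Provisioning: generate a per-user secret once and store it server-side;
# the user loads it into an authenticator app via the otpauth:// URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Verification at login: compare the submitted 6-digit code against the
# current time window (valid_window=1 tolerates slight clock drift).
submitted_code = totp.now()  # stand-in for the code the user types in
print(totp.verify(submitted_code, valid_window=1))  # True
```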

Development of AI Countermeasures

There is a growing need for countermeasures specifically designed to tackle AI-enabled threats. This could involve developing AI systems capable of detecting and neutralizing malicious AI behavior, including adversarial attacks and morphing malware.
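As a toy illustration of that idea, the sketch below trains a text classifier to separate phishing-style wording from benign mail using scikit-learn. The four training examples are fabricated for demonstration; a production detector would need large labeled corpora and far richer features than raw text.

```python
# Toy sketch of the detection idea; the training data is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: wire transfer needed before noon, keep this confidential",
    "Your account is locked, verify your password at this link now",
    "Attached is the agenda for Thursday's project sync",
    "Lunch menu for next week is posted on the intranet",
]
labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features over unigrams and bigrams feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Please buy gift cards urgently and send the codes"]))
```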

Conclusion

While generative AI brings significant potential for progress, it is crucial to be aware of the growing risk of its misuse. By adhering to strong ethical guidelines, implementing robust cybersecurity measures, and continually developing countermeasures, we can hope to harness the power of AI while mitigating the threats. Protecting against the exploitation of AI is a collective responsibility that requires the participation of all stakeholders: developers, users, and policymakers.