How Generative AI Is Being Used in Cyber Attacks
The new cyber arms race where attackers use AI too
TL;DR
• Generative AI is increasingly used in cybercrime to scale phishing, impersonation, and automation
• The CrowdStrike 2025 Global Threat Report shows a surge in AI-driven social engineering attacks
• Vishing (voice phishing) attacks increased 442% as attackers use AI-generated voices
• AI tools help criminals generate convincing emails, deepfakes, fake websites, and synthetic identities
• The real risk is scale: AI dramatically lowers the skill barrier for launching cyber attacks
Cyber attacks used to require technical skill, time, and patience. Attackers needed to craft phishing emails manually, write malware by hand, and carefully research targets before launching attacks. That barrier limited how many attacks could happen at once.
Generative AI is removing that barrier.
Security researchers now report that attackers are using AI to automate everything from phishing messages to infrastructure setup. AI tools can generate convincing emails, fake websites, and impersonation scripts in seconds. According to the CrowdStrike 2025 Threat Hunting Report, adversaries are increasingly using generative AI to scale social engineering campaigns and accelerate cyber operations.
The result is not just more attacks; it’s faster attacks. In many cases, attackers can now move through a compromised network less than 30 minutes after gaining initial access, a sign that automation is accelerating the entire attack lifecycle.
The Defensive Advantage of AI
The story is not entirely negative. AI is also becoming one of the most powerful defensive tools in cybersecurity.
Security teams are increasingly using AI to analyze threat patterns, detect anomalies, and triage security alerts. Modern security platforms use machine learning models to monitor billions of events across networks and identify suspicious behavior faster than human analysts ever could.
For example, generative AI assistants are now helping security operations centers investigate threats, filter false positives, and prioritize incidents automatically. This dramatically reduces the workload on human analysts and helps teams respond to attacks faster.
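To make the triage idea concrete, here is a minimal sketch of automated alert prioritization. The field names, severity weights, and scoring rules are illustrative assumptions, not taken from any real security product:

```python
# Hypothetical sketch: scoring security alerts so analysts see the riskiest first.
# Field names and weights are illustrative, not from any specific platform.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert):
    """Combine severity, asset criticality, and correlated alerts into one score."""
    score = SEVERITY_WEIGHT.get(alert["severity"], 0)
    if alert.get("asset_critical"):      # e.g. a domain controller or finance server
        score *= 2
    score += min(alert.get("related_alerts", 0), 5)  # cap the correlation bonus
    return score

def prioritize(alerts):
    """Return alerts sorted highest-risk first."""
    return sorted(alerts, key=triage_score, reverse=True)

alerts = [
    {"id": 1, "severity": "low", "asset_critical": False, "related_alerts": 0},
    {"id": 2, "severity": "high", "asset_critical": True, "related_alerts": 3},
    {"id": 3, "severity": "medium", "asset_critical": False, "related_alerts": 1},
]
print([a["id"] for a in prioritize(alerts)])  # → [2, 3, 1]
```

Real platforms replace the hand-tuned weights with learned models, but the principle is the same: rank first, so humans spend their attention where it matters.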
Another advantage is threat intelligence. AI systems can analyze huge volumes of attack data to identify emerging patterns across industries. That allows organizations to detect new attack techniques earlier and share defensive strategies more quickly.
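The pattern-spotting side can be sketched just as simply. This toy example flags techniques whose month-over-month frequency is growing fast; the technique labels, data, and growth threshold are all hypothetical:

```python
# Hypothetical sketch: flagging attack techniques that are surging across
# incident reports. Data and threshold are illustrative.
from collections import Counter

def emerging_techniques(this_month, last_month, min_growth=2.0):
    """Flag techniques whose frequency grew by at least min_growth x."""
    now, before = Counter(this_month), Counter(last_month)
    flagged = []
    for technique, count in now.items():
        baseline = before.get(technique, 0) or 1  # treat unseen as baseline 1
        if count / baseline >= min_growth:
            flagged.append(technique)
    return sorted(flagged)

last_month = ["phishing", "phishing", "vishing", "malware"]
this_month = ["phishing", "vishing", "vishing", "vishing", "deepfake", "deepfake"]
print(emerging_techniques(this_month, last_month))  # → ['deepfake', 'vishing']
```

Production threat-intelligence systems work over millions of events rather than a list of strings, but the core move is the same: compare current behavior against a baseline and surface what is accelerating.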
In many ways, AI is making cybersecurity teams more powerful than ever before.
The Dark Side: AI as a Cybercrime Multiplier
The same technology that helps defenders is also helping attackers.
Generative AI has become what security researchers call a “force multiplier” for cybercrime, accelerating everything from reconnaissance to persuasion in social engineering attacks.
One of the biggest areas of growth is phishing. Reports show phishing attacks linked to generative AI have surged dramatically in recent years, with some research estimating a 1,265% increase since the rise of generative AI tools.
AI is also enabling entirely new attack methods.
Attackers can now generate realistic voice clones to impersonate executives during phone calls. Deepfake videos can be used in fake meetings to convince employees to transfer money or reveal credentials. In some cases, AI-generated impersonations have been convincing enough to trigger multi-million-dollar fraud incidents.
Another growing threat is automated attack infrastructure. Generative AI can help criminals write malware, build phishing websites, or generate fake identities that pass basic verification checks. According to the CrowdStrike threat hunting research, attackers are using AI to generate phishing content, deepfake impersonations, and even synthetic identities to infiltrate systems.
The biggest change is scale. AI allows attackers to run thousands of campaigns simultaneously, with personalized messages tailored to each target.
My Perspective
The real shift here isn’t that AI invented a new kind of cybercrime. Phishing, impersonation, and fraud have existed for decades.
What AI has changed is the economics of cyber attacks.
Before generative AI, launching convincing phishing campaigns required time, language skills, and research. Now, a criminal can generate perfect emails, personalized scripts, and fake websites almost instantly.
That means the barrier to entry is dropping fast. Someone with very little technical experience can now run campaigns that previously required professional cybercrime groups.
At the same time, AI is also strengthening defensive capabilities. The future of cybersecurity will likely look like AI defending against AI, where automated security systems detect and respond to automated attacks in real time.
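What “AI defending against AI” can look like in miniature is a detect-and-respond loop that acts at machine speed, without waiting for a human. The signals, threshold, and actions below are invented for illustration:

```python
# Hypothetical sketch of machine-speed response: if a session's behavior score
# crosses a threshold, act automatically. Signals and thresholds are illustrative.

ANOMALY_THRESHOLD = 0.8

def behavior_score(session):
    """Toy score: the fraction of risky signals present in a session."""
    signals = ["new_device", "impossible_travel", "mass_download", "privilege_change"]
    return sum(1 for s in signals if session.get(s)) / len(signals)

def respond(session):
    """Choose an automated action based on the session's anomaly score."""
    score = behavior_score(session)
    if score >= ANOMALY_THRESHOLD:
        return "quarantine"       # isolate the host, revoke tokens
    if score >= 0.5:
        return "step_up_auth"     # force re-authentication
    return "allow"

suspicious = {"new_device": True, "impossible_travel": True,
              "mass_download": True, "privilege_change": True}
print(respond(suspicious))  # → quarantine
```

The point of the sketch is the latency, not the scoring: when attackers automate, the only responses that keep pace are the ones that fire without a ticket queue in the middle.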
The companies that win this race won’t be the ones avoiding AI. They’ll be the ones learning how to use it responsibly and defensively.
AI Toolkit
Olivya — AI voice assistant for automating customer support and business calls.
PulpSense — AI automation systems that streamline growth, hiring, and outbound operations.
Mindcorp AI — Multi-agent AI platform for research, strategy, and complex knowledge work.
Ridvay — AI assistant that automates workflows and surfaces strategic business insights.
Abstra — Python-powered AI workflow engine for automating full business processes.
Prompt of the Day
• Ask an AI model to explain how a modern phishing attack works step-by-step.
• Then ask it to redesign that same attack scenario from the defender’s perspective. What signals would reveal the attack early?
• Next, ask the model how generative AI could help security teams detect deepfakes, phishing campaigns, or impersonation attempts.
• Finally, ask: “If AI attackers can scale infinitely, what defensive strategies should organizations adopt to keep up?”


