When AI Writes the Scam: How Attackers Use AI to Scale Phishing, Fraud, and Deepfakes

Phishing, deepfakes, and automation are no longer niche tools. They are the everyday kit of modern attackers, and they scale alarmingly well. For SMBs, that means more convincing messages, faster campaigns, and fewer obvious “tells.” The shift shows up in the numbers. The FBI’s Internet Crime Complaint Center reported more than $16 billion in losses in 2024, a 33% jump from 2023, with phishing and spoofing among the top complaint categories.

How Attackers Are Using AI Right Now

Phishing that adapts: Generative models can draft highly personalized emails that borrow tone and context from public data. Attackers are also getting better at slipping past basic controls. In a 2025 snapshot, researchers observed a 17.3% increase in phishing email volume compared with the prior six months and a 57.9% rise in attacks sent from already compromised accounts, including supply chain partners. Obfuscated payloads are trending up as well, with ransomware delivered via phishing up 22.6% over the same period.

Deepfakes that look and sound like you: Voice and video cloning have moved from novelty to operational tool. In one widely reported case in Hong Kong, criminals used a deepfake video conference to impersonate executives and persuaded a staff member to transfer roughly $25 million. Incidents like this show how social engineering evolves once attackers can fabricate a face, a voice, and a calendar invite.

Automation that compresses the kill chain: AI helps criminals research victims, generate payload variants, and rotate infrastructure when something gets blocked. The UK’s National Cyber Security Centre assesses that AI will “almost certainly” keep making intrusion operations more effective and efficient through 2027, increasing the frequency and intensity of cyber threats. The barrier to entry drops as capable tools become available “as a service.”

Why This Matters for SMBs

SMBs feel the same threat pressure as large enterprises but often without dedicated security teams. Attackers know it. When phishing improves and deepfake-enabled fraud becomes routine, manual review and “gut feel” will not scale. The cost of being wrong is rising: IBM estimates the average global data breach cost reached $4.88 million in 2024, a figure that easily overwhelms mid-market balance sheets.

What “AI-Enhanced Defense” Actually Looks Like

You do not need to match attackers model-for-model. You do need adaptive, layered controls that put your own telemetry to work and automate response. A practical stack for SMBs looks like this:

Phishing-resistant authentication instead of codes sent by text.

Move beyond OTPs that can be stolen or replayed. NIST and CISA both recommend phishing-resistant MFA such as FIDO2/WebAuthn passkeys, which bind login to the device and site, preventing common MFA-bypass tricks.
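
To make this concrete, here is a minimal sketch of passkey registration using the standard WebAuthn browser API, in TypeScript. The relying party name and domain, the user details, and the locally generated challenge are placeholders; in a real deployment the challenge and user handle come from your server.

```ts
// Minimal passkey (WebAuthn) registration sketch. Placeholder values
// throughout; a real flow fetches the challenge and user handle from
// the server and sends the resulting credential back for verification.
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp", id: "example.com" },       // your site is the relying party
    user: {
      id: new TextEncoder().encode("user-123"), // stable, opaque user handle
      name: "pat@example.com",
      displayName: "Pat Example",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256, the most widely supported
      { type: "public-key", alg: -257 }, // RS256 fallback
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or device PIN
    },
  };
  // The resulting credential is bound to this origin, so a lookalike
  // phishing domain cannot trigger or replay it.
  return navigator.credentials.create({ publicKey });
}
```

That origin binding is the whole point: unlike an OTP, there is nothing a user can read off and type into a fake site.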

AI-driven email and collaboration security.

Use engines that analyze message content, sender behavior, and link redirections in real time. Recent research shows that attacks increasingly come from compromised legitimate accounts and abuse trusted platforms like Google Drive or DocuSign, so reputation-only filtering is not enough.
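
As a loose illustration of what behavioral scoring means (not a production filter; the signals, names, and weights here are assumptions), a simple heuristic might combine sender history with where links actually resolve:

```ts
// Illustrative message-risk heuristic. Real AI-driven engines model
// sender behavior across the whole tenant; this sketch just shows the
// kinds of signals that reputation-only filtering ignores.
interface InboundMessage {
  senderDomain: string;          // e.g. "partner.com"
  replyToDomain: string;         // where replies actually go
  finalLinkHosts: string[];      // link hostnames after following redirects
  senderFirstSeenDaysAgo: number;
}

function riskScore(msg: InboundMessage): number {
  let score = 0;
  // Reply-to diverging from the sender is a classic compromise tell.
  if (msg.replyToDomain !== msg.senderDomain) score += 2;
  // Brand-new senders warrant more scrutiny than long-known ones.
  if (msg.senderFirstSeenDaysAgo < 7) score += 1;
  // Links that resolve somewhere other than the sender's domain.
  for (const host of msg.finalLinkHosts) {
    if (!host.endsWith(msg.senderDomain)) score += 1;
  }
  return score; // quarantine above a threshold tuned to your tenant
}
```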

Endpoint detection and response (EDR) with behavioral models.

Modern EDR should flag unusual process chains, script execution, and credential theft rather than waiting for a known signature. Combine this with application control and allow lists to reduce the attack surface.
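
For a flavor of what "behavioral" means here, consider a toy rule of the kind EDR engines evaluate; the process names and logic are illustrative assumptions, and real products correlate far more signal:

```ts
// Toy behavioral rule: an Office app spawning a script interpreter is a
// common post-phishing pattern that signature matching would miss.
interface ProcessEvent {
  parentImage: string; // e.g. "C:\\Program Files\\...\\WINWORD.EXE"
  image: string;       // e.g. "C:\\Windows\\...\\powershell.exe"
  commandLine: string;
}

const OFFICE_APPS = ["winword.exe", "excel.exe", "outlook.exe"];
const SCRIPT_HOSTS = ["powershell.exe", "wscript.exe", "cmd.exe", "mshta.exe"];

// Extract the lowercase file name from a full image path.
const fileName = (p: string): string => p.split(/[\\/]/).pop()?.toLowerCase() ?? "";

function isSuspiciousChain(e: ProcessEvent): boolean {
  const officeParent = OFFICE_APPS.includes(fileName(e.parentImage));
  const scriptChild = SCRIPT_HOSTS.includes(fileName(e.image));
  // Encoded PowerShell arguments strengthen the signal on their own.
  const encodedArgs = /-enc(odedcommand)?\b/i.test(e.commandLine);
  return (officeParent && scriptChild) || (scriptChild && encodedArgs);
}
```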

Identity and access with risk scoring.

Pair least privilege with continuous assessment. If a login pattern deviates, step up authentication using phishing-resistant methods rather than blocking legitimate work.
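
The decision logic can stay simple even when the signals are rich. A minimal sketch, assuming a hypothetical policy engine with placeholder signals and thresholds; the point is to step up rather than hard-block:

```ts
// Risk-based access sketch: deviations add risk, and moderate risk
// triggers a phishing-resistant challenge instead of an outright block.
interface LoginContext {
  newDevice: boolean;
  impossibleTravel: boolean;  // geo-velocity vs. the previous login
  targetIsFinanceApp: boolean;
}

type Decision = "allow" | "step-up-passkey" | "block";

function evaluate(ctx: LoginContext): Decision {
  let risk = 0;
  if (ctx.newDevice) risk += 1;
  if (ctx.impossibleTravel) risk += 2;
  if (ctx.targetIsFinanceApp) risk += 1;

  if (risk >= 4) return "block";           // clearly hostile combination
  if (risk >= 1) return "step-up-passkey"; // challenge without stopping work
  return "allow";
}
```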

Data-layer controls and tested recovery.

Encrypt sensitive data, monitor exfiltration attempts, and maintain immutable, regularly tested backups to blunt ransomware. The FBI data and ransomware trends underline that recovery readiness is as important as prevention.
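
"Regularly tested" is the step most teams skip, and it is automatable. A minimal sketch of a restore verification job, assuming backups ship with a manifest of file paths and SHA-256 digests (that manifest format is an assumption for illustration):

```ts
// Verify a restored backup against its manifest. An empty return array
// means every file restored intact; anything else fails the drill.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

interface ManifestEntry {
  path: string;   // location of the restored file
  sha256: string; // expected digest recorded at backup time
}

async function verifyRestore(entries: ManifestEntry[]): Promise<string[]> {
  const failures: string[] = [];
  for (const { path, sha256 } of entries) {
    const digest = createHash("sha256")
      .update(await readFile(path))
      .digest("hex");
    if (digest !== sha256) failures.push(path); // corrupt or incomplete restore
  }
  return failures;
}
```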

Human-in-the-loop training that mirrors today’s attacks.

Simulations should include AI-written lures, QR code attacks, and fake meeting invites. The goal is pattern recognition and pause habits, not blame. As deepfakes spread, build verification rituals for high-risk requests, such as out-of-band callbacks to known numbers before wiring funds. The Hong Kong deepfake case above is the cautionary tale.

A Simple Operating Model to Keep Pace

Instrument first. Turn on advanced logging in email, identity, and endpoint tools. AI works best with good telemetry.
Automate the obvious. Auto-quarantine suspicious attachments, auto-expire risky sessions, and auto-require phishing-resistant re-authentication under defined conditions (see the sketch after this list).
Harden your crown jewels. Apply passkeys and conditional access to finance apps, admin consoles, and anything that can move money or change identity settings.
Practice decision drills. Run short exercises for “urgent payment request,” “surprise vendor change,” or “CEO on video asks for secrecy.” The objective is to normalize friction for critical actions.
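
To show what "automate the obvious" looks like when wired together, here is a sketch connecting a detection score (such as the email heuristic earlier) to response; quarantineMessage and expireSessions are hypothetical stand-ins for whatever your email and identity platforms actually expose:

```ts
// Detection-to-response glue: quarantine above a tuned threshold and
// force phishing-resistant re-authentication for the affected user.
interface TriageInput {
  messageId: string;
  recipient: string;
  riskScore: number; // e.g. output of a message-risk heuristic
}

const QUARANTINE_THRESHOLD = 3; // tune against your false-positive budget

async function triage(input: TriageInput): Promise<void> {
  if (input.riskScore < QUARANTINE_THRESHOLD) return;
  await quarantineMessage(input.messageId); // pull the message from inboxes
  await expireSessions(input.recipient);    // require passkey re-auth on next access
}

// Hypothetical stubs; replace with your platform's real API calls.
async function quarantineMessage(id: string): Promise<void> {
  console.log(`quarantine requested for message ${id}`);
}
async function expireSessions(user: string): Promise<void> {
  console.log(`sessions expired for ${user}`);
}
```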

The Takeaway

AI is making cybercrime faster, cheaper, and more convincing. The good news is that the same technology can tip defense in your favor if you lean into layered controls that adapt in real time. Start with phishing-resistant MFA, behavior-based detection, and recovery you trust. Then keep iterating. The gap between organizations that adopt AI-enhanced security and those that do not will widen, and attackers will notice.