IsThisAScam

Independent scam & phishing analysis. Free for individuals. APIs for developers.

Operated by Zeplik, Inc.
© 2026 Zeplik, Inc. All rights reserved.


Industry News

How Scammers Use AI: Voice Cloning, Deepfakes, Chatbots

IsThisAScam Research Team · May 10, 2026 · 4 min read
Contents
  1. Voice Cloning
  2. Deepfake Video
  3. AI Chatbots for Social Engineering
  4. AI-Generated Content at Scale
  5. The Detection Arms Race

Artificial intelligence did not create online scams, but it has supercharged them. Fraud that once required skilled operators now requires only an internet connection and freely available AI tools. This article provides a technical breakdown of exactly how each AI technology is being weaponized by scammers in 2026 — not to enable fraud, but because understanding the tools helps you defend against them.

Suspect you are dealing with AI-generated fraud? IsThisAScam.to includes AI content detection in its 6-layer analysis.

Voice Cloning

How It Works

Modern voice cloning requires as little as 3-10 seconds of reference audio. The AI model learns the speaker's vocal characteristics — pitch, tone, cadence, accent, breathing patterns — and generates new speech in that voice from any text input. Commercial tools like ElevenLabs and Resemble AI, plus open-source projects like Tortoise-TTS (and open reimplementations of research systems like VALL-E), make this accessible to anyone.

How Scammers Use It

  • Family emergency scams: Cloning a family member's voice from social media videos or voicemail greetings, then calling relatives claiming to be in distress (arrested, kidnapped, in an accident) and needing money urgently
  • CEO fraud: Cloning an executive's voice from earnings calls, interviews, or conference presentations, then calling employees to authorize wire transfers
  • Customer service impersonation: Cloning the voices of known customer service representatives to redirect payments or extract account information

How to Defend

  • Establish family safe words that an AI would not know
  • Always verify by calling the person back on their known number
  • Listen for artifacts: current cloning tech sometimes produces slight metallic quality, unnatural pauses, or inconsistent background noise
  • Ask contextual questions only the real person would know: "What did we have for dinner last night?"

Think it might be a scam?

Paste it here for a free, instant verdict.

Free · No signup required · Cmd+Enter to scan

Deepfake Video

How It Works

Face-swapping technology uses deep learning models (typically Generative Adversarial Networks or diffusion models) to replace one person's face with another in video — including in real-time. Software like DeepFaceLab and commercial tools enable live deepfake video during video calls.

How Scammers Use It

  • Video conference fraud: In a widely reported 2024 Hong Kong case, scammers used live deepfake video to impersonate multiple executives on a conference call and directed a $25 million transfer
  • Romance scam verification: When a suspicious victim asks for a video call to verify identity, deepfake technology allows the scammer to appear as the person in the stolen photos
  • KYC bypass: Using deepfake video to pass identity verification for bank accounts, crypto exchanges, and financial services
  • Fake celebrity endorsements: AI-generated videos of celebrities promoting investment schemes or products

How to Defend

  • Ask the person on a video call to perform unexpected actions: turn their head quickly, hold up a specific number of fingers, touch their ear — deepfakes can glitch with rapid unexpected movements
  • Look for visual artifacts: blurring at the edges of the face, inconsistent lighting, weird reflections in glasses, mismatched lip sync at the edges
  • Do not rely on video calls alone for high-stakes verification

AI Chatbots for Social Engineering

How It Works

Large language models (LLMs) generate human-like text responses. Scammers use these — either through API access or fine-tuned models — to automate the conversational phase of scams that traditionally required constant human attention.

How Scammers Use It

  • Romance scam automation: AI chatbots manage the weeks-long relationship-building phase across dozens of simultaneous victims. The bot maintains consistent personality, remembers previous conversations, and gradually builds emotional connection
  • Investment scam advisors: AI "financial advisors" on fake trading platforms answer questions about strategies, explain market movements, and encourage larger deposits
  • Tech support chatbots: AI-powered "support agents" on fake tech support sites that guide victims through installing remote access software
  • Customer service impersonation: AI chatbots that mimic company support channels, collecting account credentials and personal information

How to Defend

  • Ask off-topic questions: chatbots tend to answer abrupt context shifts with oddly smooth, on-script responses where a real person would be confused, curious, or annoyed
  • Request a video call or phone call: AI chatbots are text-only (though this defense is eroding with voice AI)
  • Look for patterns: overly smooth responses, never making typos, always available (humans sleep and have off days)
  • Be suspicious of anyone who is consistently "perfect" in text communication
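The "consistently perfect" pattern above can be made concrete with a toy heuristic. This is a minimal sketch, not IsThisAScam's actual detection logic: the signals (uniform message length, total absence of informal language) and the thresholds are invented for illustration, and a real detector would use far richer features.

```python
# Toy heuristic for spotting unnaturally "perfect" chat personas.
# Human chat varies in message length and contains occasional informal
# spellings; a scripted persona is often suspiciously uniform.
# All thresholds here are illustrative assumptions, not tuned values.
from statistics import mean, pstdev

INFORMAL = {"u", "ur", "lol", "gonna", "wanna", "idk", "thx", "k"}

def uniformity(messages: list[str]) -> float:
    """Coefficient of variation of message length; lower = more uniform."""
    lengths = [len(m) for m in messages]
    return pstdev(lengths) / mean(lengths)

def informality(messages: list[str]) -> float:
    """Fraction of messages containing at least one informal token."""
    hits = sum(any(w.lower() in INFORMAL for w in m.split()) for m in messages)
    return hits / len(messages)

def looks_scripted(messages: list[str]) -> bool:
    # Flag conversations that are both very uniform and never informal.
    return uniformity(messages) < 0.2 and informality(messages) == 0.0

bot = ["I completely understand your concern.",
       "I completely agree with your thinking.",
       "I certainly appreciate your patience."]
print(looks_scripted(bot))  # → True
```

A human transcript with a mix of short replies, long stories, and the odd "lol" fails both tests, so the function returns False for it.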

AI-Generated Content at Scale

How Scammers Use It

  • Fake product reviews: AI generates hundreds of unique, convincing 5-star reviews for scam products on Amazon, Trustpilot, and Google
  • Fake news articles: AI-generated "news coverage" of fake companies or investment opportunities, hosted on legitimate-looking news sites
  • Scam website content: AI generates unique About pages, privacy policies, product descriptions, and FAQs for disposable scam stores
  • Social media personas: Complete social media profiles with AI-generated photos, bios, and post histories

The Detection Arms Race

As AI improves scam quality, AI also improves detection. IsThisAScam's 6-layer analysis includes AI-content detection that examines:

  • Statistical text patterns (perplexity, burstiness) that differentiate AI-generated from human-written text
  • Image analysis for AI-generation artifacts
  • Behavioral patterns across reported interactions
  • Domain and infrastructure patterns consistent with AI-scaled operations
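The "perplexity" and "burstiness" signals in the first bullet can be sketched with a toy model. Real detectors score text against large language models; this illustration uses a smoothed character-bigram model trained on a tiny reference corpus, so the numbers it produces are illustrative only.

```python
# Toy illustration of perplexity and burstiness as text signals.
# Low perplexity = the text is highly predictable to the model;
# low burstiness = sentence lengths barely vary. AI-generated text
# tends to score low on both relative to human prose.
import math
from collections import Counter

def bigram_model(corpus: str):
    """Build an add-one-smoothed character-bigram probability function."""
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab = len(set(corpus))
    def prob(a: str, b: str) -> float:
        return (pairs[(a, b)] + 1) / (unigrams[a] + vocab)
    return prob

def perplexity(text: str, prob) -> float:
    """exp of the average negative log-probability per bigram."""
    logp = sum(math.log(prob(a, b)) for a, b in zip(text, text[1:]))
    return math.exp(-logp / max(len(text) - 1, 1))

def burstiness(text: str) -> float:
    """Variance of sentence length in words; human prose varies more."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if not lengths:
        return 0.0
    m = sum(lengths) / len(lengths)
    return sum((n - m) ** 2 for n in lengths) / len(lengths)

prob = bigram_model("the cat sat on the mat. " * 30)
print(perplexity("the cat sat on the mat.", prob))
print(burstiness("one two three. one two three."))  # → 0.0
```

Text the model has effectively memorized scores a low perplexity; perfectly repetitive sentences score a burstiness of zero, which is exactly the kind of flatness detectors look for.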

The key insight: while AI makes the content more convincing, scam operations still leave infrastructure fingerprints — new domains, failed authentication checks, known-malicious hosting providers — that automated analysis catches regardless of content quality.
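The infrastructure fingerprints above can be combined into a simple risk score. This is a minimal sketch under stated assumptions: the signal names, weights, and thresholds are invented for illustration (a production system would tune them against labeled data), and fetching the inputs — WHOIS domain age, SPF/DKIM results, hosting reputation — is out of scope here.

```python
# Illustrative scoring of infrastructure signals that survive even when
# AI-generated content is flawless. Weights and cutoffs are assumptions
# made up for this sketch, not IsThisAScam's real model.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    domain_age_days: int   # e.g. from a WHOIS lookup
    spf_pass: bool         # email authentication results
    dkim_pass: bool
    known_bad_host: bool   # hosting network on a reputation blocklist

def risk_score(s: SiteSignals) -> int:
    """Return 0-100; higher = more scam-like infrastructure."""
    score = 0
    if s.domain_age_days < 30:       # disposable scam domains are new
        score += 40
    elif s.domain_age_days < 180:
        score += 20
    if not s.spf_pass:
        score += 15
    if not s.dkim_pass:
        score += 15
    if s.known_bad_host:
        score += 30
    return min(score, 100)

fresh = SiteSignals(domain_age_days=7, spf_pass=False,
                    dkim_pass=False, known_bad_host=True)
print(risk_score(fresh))  # → 100
```

A ten-year-old domain with passing email authentication on reputable hosting scores 0 under the same rules, regardless of how polished its AI-written copy is — which is the point of the paragraph above.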

For broader context, see how ChatGPT changed fraud and what's coming in 2027.

Received something suspicious? Check it now for free →

Tags: AI · voice cloning · deepfake · chatbots · AI fraud
Related Articles
Security Tips · 4 min

How to Recognize Deepfakes: Video, Audio, Image

Industry News · 4 min

AI-Generated Scams: How ChatGPT Changed Fraud

Scam Alerts · 4 min

AI Voice Cloning Scams: How Scammers Use AI to Impersonate Your Family

Check any suspicious message

Six detection layers. Instant verdict. Free.
