IsThisAScam
Independent scam & phishing analysis. Free for individuals. APIs for developers.

Operated by Zeplik, Inc.
© 2026 Zeplik, Inc. All rights reserved.

Built for the calm, the cautious, and the careful.

Industry News

AI-Generated Scams: How ChatGPT Changed Fraud

IsThisAScam Research Team · March 18, 2026 · 4 min read
Contents
  1. The Before and After
     • Phishing Emails: Then
     • Phishing Emails: Now
  2. Five Ways Scammers Use AI in 2026
     • AI-Written Phishing at Scale
     • Voice Cloning for Phone Scams
     • Deepfake Video for Identity Fraud
     • AI Chatbots for Romance and Investment Scams
     • AI-Generated Fake Websites and Reviews
  3. The Arms Race: AI vs. AI
  4. How to Protect Yourself

The democratization of AI has been transformative for legitimate technology — and devastating for cybersecurity. Since the widespread availability of large language models, voice cloning tools, and image generators beginning in 2023, the quality and scale of online scams have undergone a fundamental shift. The era of poorly written Nigerian prince emails is over. Today's scams are fluent, personalized, and increasingly indistinguishable from legitimate communications.

Think you received an AI-crafted scam? IsThisAScam.to uses AI detection as one of its 6 analysis layers — paste any message for a free check.

The Before and After

Phishing Emails: Then

"Dear Valued Costumer, You're acount has been compromized. Pleaze click here to verifiy you're informaton or you're acount will be suspneded." — Typical pre-AI phishing email

These were easy to spot. Grammar errors, misspellings, and awkward phrasing were reliable indicators of fraud.

Phishing Emails: Now

"We noticed a sign-in attempt to your Amazon account from a device in Bucharest, Romania. If this was not you, your account has been temporarily limited for your protection. Please review your recent activity and confirm your identity to restore full access." — AI-generated phishing email, 2026

No spelling errors. No grammar issues. Proper tone, accurate brand voice, and contextually appropriate urgency. AI language models generate phishing emails that are grammatically perfect and stylistically matched to the brand being impersonated.


Five Ways Scammers Use AI in 2026

1. AI-Written Phishing at Scale

Before AI, a scammer needed English language skills to write convincing phishing emails. Now, anyone with access to a language model can generate thousands of unique, grammatically perfect phishing emails in any language. Each email can be personalized with the target's name, company, recent purchases, or interests (scraped from social media).

The scale is staggering. Security firm SlashNext reported a 1,265% increase in phishing emails since ChatGPT's release, with AI-generated emails achieving click rates 3x higher than manually written phishing emails.

2. Voice Cloning for Phone Scams

AI voice cloning tools can replicate a person's voice from as little as 3 seconds of audio. Scammers extract voice samples from social media videos, voicemail greetings, YouTube content, or podcast appearances. The cloned voice is then used in real-time phone calls.

In February 2026, a Hong Kong company lost $25 million when scammers used AI-cloned voices of multiple executives in a video conference call, instructing a finance employee to transfer funds. The employee believed they were speaking to their CFO and CEO.

For individuals, the most common attack is the "grandparent scam" upgrade — where a scammer calls elderly victims using a clone of their grandchild's voice, claiming to be in trouble and needing money urgently.

3. Deepfake Video for Identity Fraud

AI-generated video has reached the point where real-time deepfakes can defeat video verification systems. In March 2026, researchers demonstrated bypassing bank KYC (Know Your Customer) video checks using deepfake technology. Scammers use this to open fraudulent accounts, pass identity verification for loans, and conduct video-call investment scams where the "fund manager" is an AI-generated face.

4. AI Chatbots for Romance and Investment Scams

Romance scammers traditionally juggle dozens of victims simultaneously. AI chatbots now handle the relationship-building phase — engaging targets in convincing text conversations 24/7 with consistent personality, remembering details, and escalating emotional intimacy. The human scammer only intervenes for the money request.

Investment scams use AI chatbots as "financial advisers" on fake trading platforms, answering questions about strategies, returns, and market conditions with convincing fluency.

5. AI-Generated Fake Websites and Reviews

Creating a convincing fake e-commerce store used to require web design skills. Now, AI generates professional product descriptions, about pages, privacy policies, and even fake customer reviews. Combined with AI-generated product images, scammers can spin up hundreds of fake stores with minimal effort.

The Arms Race: AI vs. AI

As scammers adopt AI, detection systems must evolve. IsThisAScam's 6-layer detection engine includes AI-content analysis — examining messages for the statistical patterns that distinguish AI-generated text from human writing. Additional detection approaches include:

  • Linguistic analysis: AI-generated text has measurable statistical properties (perplexity, burstiness) that differ from human writing
  • Voice authentication: Analyzing audio for artifacts introduced by voice cloning algorithms
  • Behavioral analysis: AI chatbots respond differently than humans to unexpected questions, topic changes, or requests for video calls
  • Metadata analysis: Regardless of how perfect the content is, phishing emails still fail authentication checks (SPF, DKIM, DMARC) and link to malicious domains
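The "burstiness" signal mentioned above can be illustrated with a toy example. The sketch below scores a text by how much its sentence lengths vary: human prose tends to mix short and long sentences, while AI-generated text is often more uniform. This is a deliberately simplified illustration of the idea, not IsThisAScam's actual detection code; real systems combine many stronger statistical signals.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness score: standard deviation of sentence lengths
    (in words) divided by the mean length. Higher values mean more
    variation, which is loosely associated with human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Uniform, template-like sentences (low variation):
uniform = ("We checked your account. We found a problem. "
           "We need your reply. We will wait today.")

# Varied rhythm (a very short sentence next to a long one):
varied = ("Stop. Before you click anything in that email, take a long, "
          "careful look at where the link actually points and who "
          "really sent it.")
```

On these two samples, `burstiness(varied)` comes out higher than `burstiness(uniform)`, matching the intuition; a single metric like this is far too weak to use alone, which is why it is only one layer among several.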

How to Protect Yourself

  1. Verify through separate channels. If someone contacts you — by email, phone, or message — verify their identity by contacting them through a known, separate channel.
  2. Use family safe words. Establish a code word with close family members that an AI clone would not know.
  3. Do not trust audio or video alone. Video calls and phone calls are no longer proof of identity.
  4. Check metadata, not just content. Perfect grammar does not mean legitimate. Check email headers, domain age, and URL authenticity.
  5. Use automated detection tools. Human judgment alone is increasingly insufficient. IsThisAScam combines multiple detection layers to catch what humans miss.
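Point 4 can be partially automated. As a minimal sketch, Python's standard email module can read the receiving server's Authentication-Results header and report which of SPF, DKIM, and DMARC did not pass. The raw message below is a made-up example for illustration (the domains and header values are invented), not a real phishing sample.

```python
from email import message_from_string

# Hypothetical raw email: SPF passes, but DKIM and DMARC fail because
# the real sending domain does not match the impersonated brand.
RAW = """\
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=amaz0n-security.biz;
 dkim=fail header.d=amaz0n-security.biz;
 dmarc=fail header.from=amazon.com
From: "Amazon" <security@amaz0n-security.biz>
Subject: Sign-in attempt detected

Please confirm your identity.
"""

def auth_failures(raw_message: str) -> list[str]:
    """Return which of spf/dkim/dmarc did NOT pass, according to the
    receiving server's Authentication-Results header."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "")
    return [check for check in ("spf", "dkim", "dmarc")
            if f"{check}=pass" not in results]
```

Here `auth_failures(RAW)` flags `dkim` and `dmarc`, which is exactly the pattern the article describes: content that reads perfectly but infrastructure that cannot pass authentication. Note this trusts your own mail server's header; a scammer cannot forge it on messages your server delivers, but a production checker would parse the header properly rather than substring-match.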

See also: our detailed breakdown of how scammers use AI, and our forecast of what's coming in 2027.

Received something suspicious? Check it now for free →

Tags: AI scams · deepfake · ChatGPT · voice cloning · AI fraud
Related Articles
  • How to Recognize Deepfakes: Video, Audio, Image (Security Tips · 4 min)
  • How Scammers Use AI: Voice Cloning, Deepfakes, Chatbots (Industry News · 4 min)
  • Emerging Scam Trends: What's Coming in 2027 (Industry News · 3 min)

Check any suspicious message

Six detection layers. Instant verdict. Free.

Free · No signup required · Cmd+Enter to scan