Understanding where scams are heading gives you time to prepare. Based on current trajectory data, technology developments, and patterns emerging from our analysis of millions of scam reports, here are the trends we expect to define online fraud in late 2026 and into 2027.
Stay ahead of scammers. IsThisAScam.to continuously updates its 6-layer detection to counter emerging threats as they appear.
Trend 1: Autonomous AI Scam Agents
Current AI-assisted scams still require human orchestration — a person directs the AI to write emails, operate chatbots, or clone voices. The next evolution is fully autonomous scam agents: AI systems that independently identify targets, craft personalized approaches, manage ongoing conversations, and handle payment collection with minimal human oversight.
Early versions of these agents are already operating in cryptocurrency scam operations, where AI chatbots autonomously manage "investment advisory" conversations on fake trading platforms. By 2027, we expect autonomous agents to handle the full scam lifecycle for romance scams, tech support scams, and employment fraud.
The implication: scam volume will increase dramatically because the cost per attempt drops to near zero when AI handles everything.
Trend 2: Real-Time Deepfake Video Calls
In 2026, we saw the first widely reported cases of deepfake video being used in scam calls. By 2027, the technology will be available in consumer-grade applications, making real-time face swapping trivial. This threatens:
- Remote identity verification: Video KYC (Know Your Customer) calls used by banks and financial services can be defeated
- Romance scams: Victims can "video call" someone whose face matches the stolen photos
- CEO fraud: Video conference calls with AI-generated executives directing financial transfers
- Impersonation of public figures: Fake video messages from politicians, celebrities, or influencers promoting scams
Countermeasure: new verification protocols will need to move beyond visual and audio confirmation. Challenge-response systems, hardware-based authentication, and biometric liveness detection will become essential.
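One concrete shape a challenge-response check can take is a shared-secret exchange: a sketch, assuming a secret agreed in person beforehand (the secret value and function names here are illustrative, not a standard protocol).

```python
import hashlib
import hmac
import secrets

# Assumption for this sketch: both parties agreed on a secret in person
# (e.g., during onboarding). Face and voice on the call are never trusted.
SHARED_SECRET = b"established-in-person-beforehand"  # placeholder value

def issue_challenge() -> str:
    """The verifier generates a random nonce and reads it out on the call."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The other party computes an HMAC over the nonce with the shared secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(respond(challenge, secret), response)
```

Because the nonce is fresh each call, a deepfake replaying earlier footage cannot produce a valid response; only someone holding the shared secret can.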
Trend 3: Scams Targeting AI Users
As AI assistants become embedded in daily life, scammers will target the human-AI interaction layer:
- Prompt injection via email: Sending emails containing hidden instructions that manipulate AI email assistants into revealing sensitive information or performing actions
- Fake AI tool scams: Fraudulent apps and websites claiming to offer GPT-5 access, exclusive AI features, or "jailbroken" AI capabilities — collecting payment for nothing
- AI-generated misinformation for market manipulation: Using AI to flood social media with coordinated fake reports to manipulate stock or crypto prices
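Prompt-injection payloads are often concealed with invisible Unicode characters, so a human reader sees nothing while an AI assistant parsing the raw text sees the injected instruction. A minimal sketch of scanning incoming text for such characters (the character set below is a common but deliberately non-exhaustive selection):

```python
import unicodedata

# Characters frequently abused to hide text from human readers.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u202e",  # right-to-left override
    "\u2066", "\u2067", "\u2068", "\u2069",  # directional isolates
    "\ufeff",  # zero-width no-break space (BOM)
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, Unicode name) for each invisible/format character.

    Category "Cf" (format characters) catches most zero-width tricks
    beyond the explicit list above.
    """
    hits = []
    for i, ch in enumerate(text):
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, unicodedata.name(ch, "UNKNOWN")))
    return hits
```

Flagging a non-empty result for manual review is a cheap pre-filter before any email text reaches an AI assistant; it does not catch injections written in plain visible text.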
Trend 4: Sophisticated Supply Chain Scams
Rather than targeting consumers directly, scammers increasingly target the supply chain — compromising legitimate businesses to reach their customers:
- Compromised e-commerce plugins: Malicious code injected into legitimate Shopify/WooCommerce plugins that skim payment data from real stores
- DNS hijacking: Redirecting traffic from legitimate domains to scam sites by compromising DNS providers
- Compromised SaaS providers: Attacking small software companies that serve larger enterprises, using their access to reach downstream targets
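On the defensive side, one cheap early-warning signal for DNS hijacking is comparing a domain's current resolution against addresses recorded during a known-good session. A rough sketch (the domain and IPs are placeholders; since CDNs rotate addresses legitimately, a mismatch is a prompt for manual verification, not a verdict):

```python
import socket

# Hypothetical allowlist: IPs previously confirmed for a domain,
# e.g., recorded from a known-good session. Placeholder values.
KNOWN_GOOD = {
    "example-vendor.com": {"93.184.216.34"},
}

def dns_looks_hijacked(domain: str) -> bool:
    """Flag a domain whose current resolution shares no IP with the
    recorded set. Returns False when nothing is recorded."""
    expected = KNOWN_GOOD.get(domain)
    if not expected:
        return False  # no baseline; cannot judge
    try:
        current = {info[4][0] for info in socket.getaddrinfo(domain, 443)}
    except socket.gaierror:
        return True  # resolution failure is itself worth investigating
    return expected.isdisjoint(current)
```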
Trend 5: Cross-Platform Scam Journeys
Scams increasingly span multiple platforms, making them harder to trace:
- Initial contact via social media ad
- Conversation moves to WhatsApp or Telegram
- Victim directed to a professional website
- Payment collected via cryptocurrency
- "Customer support" handled by AI chatbot
No single platform sees the full picture. This is why comprehensive analysis tools that evaluate the entire interaction — like IsThisAScam's 6-layer engine — become increasingly important.
Trend 6: Regulatory Responses
Governments are responding to the scam epidemic with new legislation and enforcement:
- AI disclosure laws: Several U.S. states and the EU are implementing requirements to disclose AI-generated content, making it illegal to use AI to impersonate real people without consent
- Bank liability expansion: The UK's mandatory reimbursement scheme for authorized push payment fraud (launched 2024) is being studied by U.S. regulators. Banks may become liable for scam losses, incentivizing better detection
- Platform accountability: Proposed legislation requiring social media platforms and app stores to compensate victims of scams hosted on their platforms
- Telecom enforcement: Stricter penalties for carriers that fail to implement STIR/SHAKEN call authentication
How to Prepare
- Adopt a zero-trust mindset: Verify every communication through a separate channel, regardless of how convincing it appears
- Use layered detection: No single check is sufficient. Combine manual verification with automated tools
- Establish family protocols: Safe words, verification procedures, and agreed-upon processes for financial requests
- Stay informed: Follow scam trend reports (like this one) and subscribe to alerts from the FTC and your state attorney general
- Use updated tools: IsThisAScam continuously updates its detection models to counter emerging threats
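To make "layered detection" concrete, here is a toy version of the idea: several heuristic checks, each weak on its own, combined into one risk score. The patterns and weights are illustrative only and bear no relation to IsThisAScam's actual 6-layer engine.

```python
import re
from urllib.parse import urlparse

# Each check is (name, weight, predicate). Weights are arbitrary
# illustrative values; real engines tune them against labeled data.
CHECKS = [
    ("urgency language", 2, lambda t: bool(re.search(
        r"\b(act now|urgent|immediately|account (?:locked|suspended))\b",
        t, re.I))),
    ("crypto or gift-card payment", 3, lambda t: bool(re.search(
        r"\b(bitcoin|usdt|crypto wallet|gift card)\b", t, re.I))),
    ("bare-IP link", 3, lambda t: any(
        re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", urlparse(u).hostname or "")
        for u in re.findall(r"https?://\S+", t))),
]

def risk_score(message: str) -> tuple[int, list[str]]:
    """Sum the weights of triggered checks; return score plus reasons."""
    triggered = [(name, weight) for name, weight, fn in CHECKS if fn(message)]
    return sum(w for _, w in triggered), [name for name, _ in triggered]
```

No single check is decisive, which is exactly the point: a scam message usually trips several layers at once, while legitimate mail rarely trips more than one.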
See also: how AI changed fraud in 2026 and the AI tools scammers use.
Received something suspicious? Check it now for free →