Hello Fraud Fighters!
This week the FBI out their annual IC3 fraud report with the 2025 stats. And oh what a shocker… fraud is up. All of it. Investment fraud, BEC, tech support scams. All of the things.
The one that really gets my goat is elder abuse - the average (AVERAGE!) fraud loss was $38,500. Absolutely reprehensible, and it’s getting worse. The number of incidents is 59% higher than in 2024.
Meanwhile, synthetic identities just became the fastest-growing fraud type on the planet according to LexisNexis, a British energy company got BEC'd for £700,000, and Google DeepMind published a paper that should be required reading for anyone deploying AI agents in a payment workflow.
Let's get into it.
Big Story: $20.9 Billion. One Million Complaints. The FBI's 2025 Fraud Report Is Here.
The FBI's Internet Crime Complaint Center released its 2025 Annual Report this week, and for the first time in IC3's 25-year history, the complaints crossed one million. Total reported losses hit $20.877 billion — up 26% from 2024, up nearly 400% from $4.2 billion in 2020. Every metric that matters moved in the wrong direction.
The headline numbers are staggering enough on their own. But the composition of the losses is where fraud teams should be paying close attention. Investment fraud led all categories at $8.65 billion — more than 40% of all losses. Business email compromise came in second at $3.05 billion. Tech support scams rounded out the top three at $2.1 billion. These aren't emerging threat categories. They're well-understood, well-documented, and still accounting for nearly $14 billion in losses between them.
The elder fraud numbers are genuinely alarming. Americans over 60 filed 201,000 complaints with losses totaling $7.748 billion — a 59% jump from 2024. The average loss per victim in this cohort was $38,500, nearly double the overall average. A total of 12,444 seniors each lost more than $100,000. Investment fraud accounted for $3.5 billion of that alone, with tech support scams and romance fraud piling on behind it. For any bank or fintech with an older customer base, this data should be pinned to the wall of every fraud team standup.
The most editorially significant move in this year's report: the FBI named AI as a discrete crime category for the first time. The IC3 received 22,364 AI-related complaints, costing Americans $893 million — voice cloning, deepfake video impersonation of public figures and family members, AI-generated phishing content, and fake identification documents all called out explicitly. To note: the bureau is now tracking this separately, which means regulators and examiners will be looking for your controls to match.
Operation Level Up — the FBI's proactive initiative to identify active crypto investment fraud victims and warn them before their losses compound — notified 3,780 victims last year. 78% were unaware they were being scammed at the time of contact. In one case, an agent stopped a victim from cashing out $750,000 from his 401(k). In another, a victim was mid-process on selling her house to fund a $500,000 "investment." The program has saved an estimated $225 million in 2025 alone and triggered 38 referrals for suicide intervention. The scale of the psychological damage here goes well beyond the financial numbers
The so what for operators: the IC3 report is a floor, not a ceiling. The FBI itself acknowledges that victims frequently don't report — out of embarrassment, uncertainty, or simply not knowing where to go. The $20.9 billion is what made it into the system. The actual number is higher. If your institution's fraud losses aren't moving proportionally to these national trends, it's worth asking whether your detection rates are improving or your reporting is lagging.
Our friends at Safeguard are hosting their inaugural AI Deepdive Retreat for leaders in fraud, compliance, and identity. Registration has officially closed, but readers of This Week in Fraud still have a final chance to apply. Late registrants can submit their application by April 16. Qualified practitioners receive a complimentary pass plus $1,500 in travel reimbursement. May 3–6 at The Broadmoor in Colorado Springs.
Apply Before April 16 → Last chance to join Safeguard
Quick Hit #1: Synthetic Identity Fraud Just Went Global
LexisNexis Risk Solutions released its 2026 Cybercrime Report this week, drawn from analysis of 116 billion online transactions in 2025. The headline: synthetic identity fraud increased eight-fold year-over-year. It now accounts for 11% of all reported fraud globally, making it the fastest-growing fraud type on the planet.
Until recently, synthetic identity fraud — where criminals stitch together real and fabricated data to construct a plausible fake person — was considered a predominantly American problem, concentrated in credit and lending. That's no longer true. Latin America now accounts for 48% of synthetic identity fraud in its region, driven by the explosive growth of regulated gaming and e-commerce. Generative AI is providing the manufacturing engine: fraudsters are using it to generate supporting documentation, historical credit backstory, and behavioral patterns that make synthetic identities look seasoned rather than freshly minted.

The report also flags two other vectors worth watching. Malicious bot attacks rose 59% in 2025, with bots now sophisticated enough to mimic human cursor movements convincingly enough to defeat behavioral biometrics. And agentic AI traffic — autonomous agents interacting with digital platforms — surged 450% between January and December. LexisNexis notes no current evidence of malicious intent in that agentic traffic, but flags it explicitly as a detection challenge: agents produce a distinct digital signature that existing fraud models weren't built to classify. That's a gap that will be exploited.
Quick Hit #2: Zephyr Energy Loses £700K to BEC — "Industry Standard Practices" Were Not Enough
Zephyr Energy, a London-listed oil and gas company, disclosed this week via a regulatory filing with the London Stock Exchange that a hacker stole £700,000 — close to $900,000 — from a U.S.-based subsidiary by intercepting a contractor payment and redirecting it to an attacker-controlled account. Classic business email compromise.
What stands out is the company's own characterization: they used "industry standard practices" for their technology and payment systems. The Register noted that Zephyr has since implemented "additional layers of security."

This is an objective lesson in why "industry standard" is not the same as "sufficient." BEC accounted for $3.05 billion in losses in 2025 according to the IC3 report flagged earlier. The attack vector is not novel but victims keep falling for it because out-of-band payment verification — a phone call to a known number, a callback policy for any bank account change, a secondary approval threshold — are still treated as inconvenient friction rather than mandatory hygiene. In summary: Zephyr lost nearly a million dollars because someone changed a routing number in an email. They won’t by any means be the last.
Quick Hit #3: Google DeepMind Maps How AI Agents Get Hijacked
If your institution is deploying AI agents to handle any part of a financial workflow — payments, vendor management, customer communications — a new paper from Google DeepMind published this week deserves your immediate attention. Titled "AI Agent Traps," it's the first systematic framework for how malicious web content can be engineered to manipulate, deceive, and exploit autonomous agents — not by attacking the model directly, but by corrupting the environment the agent operates in.

The taxonomy is six categories deep: content injection traps (hidden instructions embedded in HTML that humans never see but agents process as commands), semantic manipulation (framing and bias attacks that skew agent reasoning without issuing overt instructions), cognitive state attacks (poisoning the retrieval databases agents use for memory), behavioral control traps (coercing agents into data exfiltration or spawning attacker-controlled sub-agents), systemic traps (coordinated attacks across entire agent networks), and human-in-the-loop traps that target the human supervisor watching the agent rather than the agent itself.
The numbers are quite something. Simple prompt injections embedded in ordinary web content partially hijack agents in up to 86% of tested scenarios. Memory poisoning achieves success rates above 80% with less than 0.1% data contamination — while leaving benign behavior largely intact, making detection extremely difficult. In documented tests against Microsoft M365 Copilot, a single crafted email caused the system to bypass internal classifiers and leak its full privileged context to an attacker-controlled endpoint.
The paper also surfaces a liability question nobody has answered yet: if a compromised AI agent executes a fraudulent payment, who is legally responsible — the operator, the model provider, or the domain that served the malicious content? Right now, the answer is nobody knows.
Quick Hit #4: China Is Cracking Down on Scams. Just Not the Ones Targeting You.
WIRED and The Record from Recorded Future News this week covered congressional testimony laying out what fraud professionals have long suspected but rarely seen stated so bluntly by U.S. officials: China's crackdown on Southeast Asian scam compounds is real, but deliberately selective. Beijing moves against operations that target Chinese citizens. The ones targeting Americans are left largely alone.
As Chinese authorities intensified domestic enforcement, online scam losses inside China dropped roughly 30%. Over the same period, U.S. losses from the same networks rose approximately 40%. Americans are now, in the words of U.S. officials, "among the top targets" of China-linked scam centers. The DOJ's Scam Center Strike Force — stood up just months ago — now has more than 150 personnel across the country and has frozen over $578 million in cryptocurrency to date.
The U.S.-China Economic and Security Review Commission went further in a recent report, documenting a new domestic twist: Chinese criminals arrested in earlier crackdowns are being released and setting up smaller-scale operations inside China itself — targeting foreigners exclusively. The Chinese term for this is "foreigner butchering." The structural conclusion is grim: China's enforcement posture has created an incentive structure that actively redirects criminal capacity toward American victims. For institutions with high concentrations of customers targeted by romance or investment scams, the operational environment just got more documented, if not more manageable.
This Week in Fraud is a publication for fintech operators, fraud teams, and risk professionals. Have a tip or story? Drop Nick Holland a note at [email protected]



