Hello Fraud Fighters,
Welcome back to This Week in Fraud. This week, INTERPOL put a price tag on the global fraud problem: 442 billion dollars in losses in 2025 alone. Oof.
At the same time, scam compound victims now span nearly 80 countries, criminals are renting cloud phones to neutralize device fingerprinting, and the Bank of London just learned what it costs to submit fabricated documents to its regulator.
Let’s get into it.
SPONSORED
Safeguard is hosting their inaugural AI Deepdive Retreat for 500+ fraud, compliance, and identity practitioners and builders from May 3–6 at The Broadmoor in Colorado Springs. Practitioners get a complimentary ticket and $1,500 in travel reimbursement. Spots are limited and registration closes April 10th.
Big Story: INTERPOL’s 442 Billion Dollar Fraud Bill
In March 2026, INTERPOL released the second edition of its Global Financial Fraud Threat Assessment. Global financial fraud losses in 2025 are estimated at $442 billion, a realized loss figure for last year rather than a forward-looking projection. It’s such a huge figure that they may as well have published eleventy gajillion, it’s that staggeringly hard to fathom.
INTERPOL now ranks financial fraud alongside cocaine trafficking, synthetic drug trafficking, heroin trafficking, and money laundering as one of the five most serious global crime threats. It is fraud, not generic “cybercrime” or terrorism, that sits on that list.
AI runs through the entire report like a stick of Blackpool rock. INTERPOL estimates that fraud schemes using AI tools are 4.5 times more profitable than those that do not, which probably comes as a surprise to absolutely no one. And agentic systems are already handling the entire campaign lifecycle — reconnaissance, credential harvesting, system infiltration, and personalized ransom communication — without direct human control. Deepfake-as-a-Service offerings on dark web markets sell full synthetic identity kits, including video avatars, cloned voices, and biometric data, which has sharply lowered the barrier to entry for sophisticated fraud operations.

Blackpool rock (dentist’s nemesis candy from Northern England)
Scam center data shows how far this model has scaled. What began as a largely Southeast Asian issue — compounds in Myanmar, Cambodia, and Laos forcing trafficked workers to run pig-butchering and investment scams — has become a global operation. By late 2025, victims trafficked into scam centers represented nearly 80 nationalities, up from 66 in the first quarter of the year. New hubs have been identified in the Middle East, North Africa, South America, and West Africa. One Zambia-based operation is estimated to have defrauded 65,000 victims of roughly 300 million dollars. In West and Central Africa, a local variant has emerged: pyramid-style frauds run over mobile messaging apps, recruiting unemployed 16–25-year-olds and pushing them to target their own families and communities.
Global countermeasures have been significant; however, against 442 billion dollars in estimated losses, the 439 million dollars recovered is just under one‑tenth of one percent. INTERPOL’s forward risk assessment is explicit: global financial fraud risk is HIGH, with a material escalation expected over the next three to five years, driven primarily by wider AI availability and low entry barriers for operators.
For fintech operators:
Fraud is now a supply chain business. Point solutions focused on isolated attack types cannot keep up with a modular, cross‑border fraud supply chain.
Traditional education campaigns and rule-based detection approaches have largely stopped working in the face of modern tooling. Fraud advice that was effective in 2020 no longer holds in 2026.
The report’s recommendations are specific and actionable: institutions should update KYC and AML processes and implement real-time transaction monitoring, while legislators should criminalize the malicious use of generative tools for impersonation and large‑scale voice cloning.
Source: INTERPOL
Quick Hit #1: Virtual Phone Farms vs Device Fingerprinting
Device fingerprinting has been one of the more dependable tools for detecting account takeover, tying accounts to known devices and flagging anomalies. New research from Group‑IB shows that criminals have scaled a systematic workaround — and it has been available on commercial platforms for years. Cloud‑hosted virtual Android devices, rentable for 10 to 50 cents per hour, can closely mimic real devices, including hardware attestation, sensor behavior, time zone, and network routing. Infrastructure originally designed for app testing and game bot‑farming has been fully repurposed into fraud tooling.

“I only wanted to shop for Labubus…”
Operators are “pre‑warming” these cloud phones by installing banking apps, registering accounts, and running small legitimate transactions over time. By the time an account takeover attempt happens, the underlying device profile looks stable and low risk. A secondary market has already formed: darknet vendors are offering pre‑warmed Revolut and Wise accounts created on cloud phones for 50 to 200 dollars each, often with bundled ongoing access to the underlying virtual device.
Operationally, this means device fingerprinting cannot be the only control layer. Institutions need to embed behavioral analytics that compare in‑session actions to historical patterns, not just evaluate device trust. A three‑year‑old account with a clean history that suddenly initiates a large outbound transfer immediately after receiving an inbound OTP voice call should be treated as high‑risk regardless of how “familiar” the device looks.
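The rule in that last sentence can be sketched as a toy scoring function. Everything here — field names, thresholds, weights — is an illustrative assumption, not any vendor’s actual logic; the point is only that device trust contributes to the score rather than short-circuiting it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    device_trusted: bool              # fingerprint matches a known device
    account_age_days: int
    transfer_amount: float
    avg_historical_transfer: float
    minutes_since_otp_call: Optional[float]  # None if no recent OTP voice call

def risk_tier(s: Session) -> str:
    """A familiar device never clears a session on its own;
    behavioral anomalies can override device trust."""
    score = 0
    if not s.device_trusted:
        score += 2
    # Transfer far outside the account's historical pattern
    if s.avg_historical_transfer > 0 and s.transfer_amount > 10 * s.avg_historical_transfer:
        score += 3
    # Large transfer initiated shortly after an inbound OTP voice call
    if s.minutes_since_otp_call is not None and s.minutes_since_otp_call < 30:
        score += 3
    if score >= 5:
        return "high"
    return "review" if score >= 2 else "low"

# The scenario above: aged account, trusted device, anomalous behavior
s = Session(device_trusted=True, account_age_days=3 * 365,
            transfer_amount=25_000, avg_historical_transfer=400,
            minutes_since_otp_call=5)
print(risk_tier(s))  # high
```

Real systems would feed a trained model rather than a hand-set rubric, but the design point survives: the device signal is one feature among many.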
Source: Group‑IB
Quick Hit #2: When the “AI Model” Is a Human Being
Scam compounds are adding a new specialist role to their operations: “AI models.” Researchers following Southeast Asian fraud hubs have documented compounds openly recruiting people to appear on live video calls while deepfake software overlays their faces with whatever persona a victim expects to see. The live call is the conversion moment, turning weeks of grooming into a wire transfer or crypto deposit, so compounds outsource this step to dedicated staff.
Humanity Research Consultancy highlights one recruitment example: a 24‑year‑old Uzbek using the name “Angel,” who advertises fluency in four languages and a year of experience in the work, asking 7,000 dollars per month. Compounds in Cambodia, Myanmar, and Laos post job descriptions, manage candidate funnels, and target around 100 live video calls per day per “model.”
For financial institutions, “get on a video call to confirm they’re real” is no longer reliable safety advice. A live video call from a scam compound that uses a specialist on a deepfake overlay is not visually distinguishable from a legitimate conversation. Defenses need to move earlier in the journey — focusing on patterns of interaction, transaction velocity, payment rails, and destination rather than relying on last‑mile visual verification.
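A minimal sketch of what “moving earlier” can look like in practice: a sliding-window velocity check paired with a first-seen-destination flag. The class, thresholds, and flag names are hypothetical, chosen only to illustrate the pattern:

```python
from collections import deque
import time

class VelocityMonitor:
    """Toy sliding-window monitor: flag accounts whose outbound transfer
    count or total within the window exceeds a threshold, plus any
    never-before-seen destination. All thresholds are illustrative."""
    def __init__(self, window_seconds=3600, max_count=3, max_total=10_000):
        self.window = window_seconds
        self.max_count = max_count
        self.max_total = max_total
        self.events = {}       # account_id -> deque of (timestamp, amount)
        self.known_dests = {}  # account_id -> set of seen destinations

    def check(self, account_id, amount, destination, now=None):
        now = now if now is not None else time.time()
        q = self.events.setdefault(account_id, deque())
        q.append((now, amount))
        # Drop events that have aged out of the window
        while q and now - q[0][0] > self.window:
            q.popleft()
        flags = []
        if len(q) > self.max_count:
            flags.append("velocity_count")
        if sum(a for _, a in q) > self.max_total:
            flags.append("velocity_total")
        seen = self.known_dests.setdefault(account_id, set())
        if destination not in seen:
            flags.append("new_destination")
            seen.add(destination)
        return flags
```

A single oversized transfer to a fresh destination trips two flags at once, well before any last-mile visual check would come into play.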
Sources: Humanity Research Consultancy
Quick Hit #3: A Bank Commits Document Fraud Against Its Regulator
This week, the Prudential Regulation Authority (PRA), part of the Bank of England, fined Bank of London and its parent company Oplyse Holdings 2 million pounds (about 2.6 million dollars) for misleading the regulator about the bank’s capital position between October 2021 and May 2024. The PRA found that the bank submitted fabricated documents that misrepresented its financial condition, failed to maintain adequate capital, and failed to act with integrity toward its supervisor.
The case sets two precedents. It is the PRA’s first finding that a firm failed to act with integrity, and its first enforcement action against a parent financial holding company. The original fine was 12 million pounds but was reduced after the PRA concluded that the full amount would push the bank into serious financial distress — a revealing detail about the firm’s current fragility.
Full notice: Bank of England
Quick Hit #4: Elder Fraud and the Need to Shift Left
At RSAC, TIAA’s head of enterprise fraud management, Rick Swenson, framed elder fraud as a timing failure. American seniors lost an estimated 4.9 billion dollars to scams in 2024, a figure widely regarded as underreported. The central issue: banks typically see the customer only when a transfer is being initiated — the very end of the scam kill chain — after weeks of behind‑the‑scenes manipulation.

“Hi, I’ll be your fraudster today…”
Swenson described an 87‑year‑old widow whose 250,000‑dollar transfer TIAA successfully blocked, only to then spend three and a half weeks working with law enforcement, Adult Protective Services, and the FBI to persuade her to cancel it. By then, the scammer’s story felt more credible to her than the bank’s warnings. His recommendation is to treat elder fraud as a distributed security problem that demands coordination with telecom providers and government agencies before the victim reaches the wire screen. TIAA is already using models to review customer phone transcripts for tightly scripted answers, which can indicate that a scammer is coaching the victim in real time.
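TIAA’s actual transcript models are not public. As a rough illustration of the underlying idea — scoring how closely a customer’s answers track known coaching scripts — here is a toy fuzzy-matching sketch; the script lines and threshold are invented:

```python
from difflib import SequenceMatcher

# Hypothetical lines a coaching scammer might feed a victim;
# a real deployment would use a learned model, not a static list.
KNOWN_SCRIPT_LINES = [
    "the funds are for a home renovation for my grandson",
    "no one is pressuring me, this is entirely my decision",
    "i have known this person for many years",
]

def scripted_answer_score(answer: str) -> float:
    """Highest fuzzy similarity between the caller's answer and any
    known coaching-script line (0.0 to 1.0)."""
    answer = answer.lower().strip()
    return max(SequenceMatcher(None, answer, line).ratio()
               for line in KNOWN_SCRIPT_LINES)

def flag_transcript(answers, threshold=0.85):
    """Flag a call when multiple answers track a known script too
    closely — a possible sign of real-time coaching."""
    hits = sum(1 for a in answers if scripted_answer_score(a) >= threshold)
    return hits >= 2
```

The interesting property is the same one Swenson describes: the signal fires during the conversation, weeks before the wire screen.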
Source: American Banker
Quick Hit #5: Lloyds App Glitch Exposes 448,000 Customers
Lloyds Banking Group confirmed that a software defect introduced during an overnight update on March 12 caused customers of Lloyds, Halifax, and Bank of Scotland to see other users’ transactions in their app. The issue exposed data for up to 447,936 people, and 114,182 customers clicked into the stray transactions, potentially viewing account numbers, payment references, and National Insurance numbers. Lloyds has paid 139,000 pounds in goodwill compensation to 3,625 affected customers.

The FCA and ICO are both engaged, and no direct financial losses have been confirmed so far. For fraud teams, the unresolved question is whether the exposed account and identity data is now circulating as raw material for social engineering and targeted attacks.
Source: The Register
This Week in Fraud is a newsletter for fintech operators, fraud teams, and risk professionals. Have a tip or story? Drop Nick Holland a note at [email protected].