🔍🤖⚠️ DEEPFAKE DIGEST

DEEPFAIC THREAT INTELLIGENCE - 31 MARCH 2026 - 06:00 LOCAL

Deepfake · Social Engineering · AI-Generated Media Threats

🔴 THREAT LEVEL: HIGH

Deepfake-enabled fraud is operating at industrial scale across financial, political, and social channels, with scam compounds deploying real-time AI face-swapping, organised crime networks weaponising voice cloning, and AI-generated political disinformation accelerating ahead of the 2026 US midterms.

🚨 ACTIVE ATTACKS & INCIDENTS - Last 24-48 Hours

🏭 SCAM COMPOUNDS NOW HIRING LIVE "AI MODELS" FOR REAL-TIME DEEPFAKE VIDEO CALLS

Southeast Asian scam operations have upgraded their playbook: when a victim requests a video call, scam bosses deploy specialist "AI models" - real people hired to appear on screen while live deepfake software alters their appearance in real time to match the fictional persona the victim expects to see. Recruitment ads describe roles handling around 100 live video calls per day, primarily targeting romance scam and crypto fraud victims. One recruited operator, a 24-year-old woman from Uzbekistan, demanded $7,000 per month for her services - a sign of how professionalised this attack vector has become.

📰 Malwarebytes, March 2026 · Attack Type: Romance/Crypto Fraud · Vector: Live Deepfake Video Call

📞 AI DEEPFAKE VOICE CALLS NOW HITTING 1 IN 4 AMERICANS

The State of the Call 2026 report reveals that AI deepfake voice calls have now reached 1 in 4 Americans, with consumers reporting that scammers are outpacing mobile network operators' call-authentication defences by a two-to-one margin. The scale of the problem reflects how accessible voice cloning tools have become - attackers can generate convincing voice replicas from only a few seconds of publicly available audio harvested from social media, voicemail greetings, or brief phone calls. Common attack vectors include the family emergency scam, CEO fraud, and bypassing voice-based security systems at financial institutions.

📰 Business Wire, March 1, 2026 · Attack Type: Voice Clone Fraud · Vector: Phone/Social Engineering

💸 $25M ARUP DEEPFAKE CFO CONFERENCE FRAUD CONTINUES TO RESHAPE CORPORATE SECURITY

The case of Arup, the global engineering firm, remains a landmark in corporate deepfake fraud: a finance manager joined what appeared to be a routine multi-party video conference, saw the CFO on screen, heard colleagues speaking, and authorized multiple wire transfers totalling $25 million. Every person on the call was an AI-generated deepfake. The incident - now widely examined in the security community - is driving a fundamental rethink of video call verification protocols and identity confirmation procedures in finance and legal teams globally.

📰 Security Boulevard, March 2026 · Attack Type: BEC / CFO Fraud · Vector: Deepfake Video Conference

🌐 UN WARNS ORGANISED CRIME HAS WEAPONISED AI DEEPFAKES AND VOICE CLONING AT SCALE

The UN Office on Drugs and Crime (UNODC) issued a global wake-up call, warning that transnational organised crime networks are now operating as "criminal service providers" - developing and deploying malware, weaponising AI for deepfakes and voice cloning, and selling cybercrime capabilities as services. Scam centres in Southeast Asia are the surface layer of a deeper ecosystem involving human trafficking, corruption, and transnational money laundering. Recent raids in the Philippines and Cambodia confirmed the industrial scale of these operations, which are generating billions in illicit financial flows.

📰 UN News, March 2026 · Attack Type: Organised Fraud / DaaS · Vector: Deepfake + Voice Clone + Social Engineering

🎭 AI-GENERATED MEDIA INCIDENTS

🇺🇸 AI DEEPFAKES NOW DEPLOYED ACROSS MULTIPLE 2026 US MIDTERM CAMPAIGNS

With no federal regulation constraining the use of AI in political messaging, deepfake political ads are proliferating across the 2026 US midterm landscape. A 2025 peer-reviewed study found that people struggle to identify deepfake videos and that their political opinions are measurably affected by this type of misinformation. Only 28 states have passed legislation addressing AI in political ads, and most focus on disclosure requirements rather than outright bans - leaving a wide-open attack surface heading into November.

📰 Honolulu Star-Advertiser / AP, March 28, 2026 · Type: Political Deepfake · Platform: Multiple Social and Digital Ad Platforms

🗳️ REPUBLICANS RELEASE AI DEEPFAKE OF DEMOCRATIC CANDIDATE JAMES TALARICO

The National Republican Senatorial Committee released an online ad featuring a realistic but entirely fabricated AI-generated version of Democratic candidate James Talarico - described by analysts as the first political ad to feature a phony version of a candidate speaking at length in a lifelike manner. Experts say the growing normalisation of such content risks further eroding voter trust in political communication and supercharging the spread of disinformation about candidates and policy positions.

📰 CNN Politics, March 13, 2026 · Type: Political Deepfake Ad · Platform: Online / Social Media

💣 AI-GENERATED CONFLICT DISINFORMATION FLOODS CHANNELS AMID US-ISRAEL-IRAN TENSIONS

Generative AI is being used to manufacture fake images and videos of missile and drone attacks on cities, military bases, and ports in Israel, Iran, and Gulf states - even when such attacks never occurred. The content is designed to trigger fear, financial market volatility, and real-world escalation responses. This represents a significant evolution in the use of synthetic media as a geopolitical weapon, with AI-generated disinformation now actively shaping public perception of live conflict situations.

📰 World Geostrategic Insights, 2026 · Type: Conflict Disinformation · Platform: Social Media / Messaging Apps

📊 WEF: OVER 500,000 PIECES OF SYNTHETIC MEDIA CIRCULATE DAILY ACROSS SOCIAL PLATFORMS

The World Economic Forum's 2026 analysis places mis- and disinformation among the top short-term global risks, noting that over 500,000 pieces of synthetic media now circulate across social platforms every single day. AI-generated deepfakes have become nearly indistinguishable from authentic content at scale. The WEF warns that widespread information disorder has become a destabilising systemic force capable of disrupting democracies, eroding social cohesion, and making existing crises - from economic downturns to natural disasters - significantly worse.

📰 World Economic Forum, March 2026 · Type: Synthetic Media / Disinformation · Platform: Social Media Broadly

🏛️ POLITICAL & REGULATORY LANDSCAPE

🇺🇸 WASHINGTON STATE SIGNS NEW DEEPFAKE LAW PROTECTING DIGITAL IDENTITY RIGHTS

Governor Ferguson has signed Washington State's new deepfake law, updating personality rights to address unauthorised use of digital likenesses and AI-generated replicas. The law takes effect June 10, 2026, and is part of a broader wave of state-level action: as of early 2026, 46 US states have now enacted legislation directly targeting AI-generated media. The Washington law adds civil remedy provisions, giving individuals a clear legal pathway to pursue claims against bad actors who misuse their likeness.

📰 NBC Right Now, 2026 · Jurisdiction: Washington State, USA · Status: Signed - Effective June 10, 2026

🇺🇸 TAKE IT DOWN ACT IN EFFECT - PLATFORMS FACE 48-HOUR REMOVAL REQUIREMENT

The federal TAKE IT DOWN Act, signed into law in 2025, is now fully in effect. The law criminalises the publication of non-consensual intimate imagery, including AI-generated deepfakes, and requires online platforms to remove flagged content within 48 hours of a valid report. Schools and employers are now navigating new compliance obligations, with legal advisors noting that the law creates significant liability exposure for organisations that fail to act promptly on removal requests involving AI-generated content.
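For compliance teams, the operative number is the 48-hour window from a valid report. A minimal sketch of tracking that deadline might look like the following; the function names are illustrative, not any platform's real API:

```python
from datetime import datetime, timedelta, timezone

# The Act's removal window: 48 hours from receipt of a valid report.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(reported_at: datetime) -> datetime:
    """Latest time by which the flagged content must be removed."""
    return reported_at + REMOVAL_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True once the removal window for a report has elapsed."""
    return now > removal_deadline(reported_at)

# Example: a report received at noon UTC on 1 March must be actioned
# by noon UTC on 3 March.
report_time = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
deadline = removal_deadline(report_time)
```

In practice a platform would persist report timestamps and alert well before the deadline rather than at it, but the core obligation reduces to this arithmetic.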

📰 Fisher Phillips, 2026 · Jurisdiction: Federal, USA · Status: In Force

🇺🇸 MINNESOTA PROPOSES SWEEPING BAN ON NUDIFICATION DEEPFAKE TOOLS

Minnesota legislators have introduced a bill that would prohibit access to websites, apps, or software designed to nudify images or videos of real people, while also banning advertising or promotion of such products. The proposal goes further than most state laws by targeting the tooling itself rather than just the resulting content, and includes civil lawsuit provisions allowing victims to pursue claims against creators. The bill reflects a growing legislative trend toward targeting the deepfake supply chain rather than solely addressing downstream harms.

📰 Fox 9 Minneapolis, March 2026 · Jurisdiction: Minnesota, USA · Status: Proposed / In Committee

🇪🇺 EU AI ACT DEEPFAKE DISCLOSURE RULES ENFORCEABLE FROM AUGUST 2026

The EU AI Act's requirements for mandatory labelling of AI-generated and deepfake content, along with disclosure of synthetic interactions, become enforceable in August 2026. Breaches of these transparency obligations carry fines of up to €15 million or 3% of global annual turnover, whichever is higher. This is expected to have significant knock-on effects for global platforms and media companies that operate in EU markets, forcing rapid investment in content authentication and provenance systems to demonstrate compliance ahead of the enforcement deadline.

📰 Ondato, 2026 · Jurisdiction: European Union · Status: Enforcement Begins August 2026

🧠 INTELLIGENCE SUMMARY

🔺 The deepfake attack surface has expanded from high-value targets to mass-scale operations. The emergence of scam compounds running hundreds of live AI-augmented video calls per day marks a critical inflection point. What was once limited to sophisticated, targeted attacks is now a commoditised, industrial-scale fraud capability accessible to mid-tier criminal organisations with modest budgets.

🔺 Video calls can no longer serve as identity verification. The $25M Arup fraud and the scam compound AI model playbook together demonstrate that real-time deepfake tooling has eliminated the evidentiary value of a video call. Organisations relying on visual confirmation for financial authorisation or identity verification are operating with a fundamentally broken control.

🔺 Political deepfakes are normalising without effective guardrails. The deployment of fabricated candidate videos in the 2026 midterms - with no federal prohibition and a fractured patchwork of state disclosure laws - signals that AI-generated electoral disinformation is entering the mainstream. The research evidence that voter opinion is measurably shifted by deepfake exposure makes this a systemic democratic risk, not just a reputational one.

🔺 Synthetic identity fraud is outpacing traditional KYC defences. AI-driven tools now enable the creation of thousands of convincing synthetic identities at low cost, with US lenders facing over $3.3 billion in exposure to synthetic-identity-linked new accounts. Fraudsters are exploiting the gap between the speed of identity creation and the latency of financial institution fraud detection systems.

🔺 Regulatory momentum is building but enforcement lags the threat. The EU AI Act enforcement deadline (August 2026), the TAKE IT DOWN Act, and 46 states with deepfake legislation represent meaningful progress. However, the UN warning that organised crime is now selling deepfake capabilities as a service means the threat is scaling faster than any single jurisdiction can address through domestic law alone.

👁️ WATCH LIST - #1 EMERGING RISK TODAY

Real-Time Live Deepfake Video as the New Standard Attack Vector for High-Value Fraud. The scam compound "AI model" approach has cracked the final barrier in deepfake-enabled fraud: convincing live interaction. Until recently, deepfakes were limited to pre-recorded clips or low-quality real-time feeds that broke down under scrutiny. The combination of a real human operator with AI-enhanced appearance, running dozens of calls per day, removes that weakness entirely. As the tooling becomes cheaper and more accessible, this attack pattern will migrate from Southeast Asian scam compounds to domestic fraud operations, BEC campaigns, and KYC bypass attacks on regulated financial institutions. Organisations should treat any video call involving financial authorisation, identity confirmation, or sensitive data exchange as potentially compromised - and implement out-of-band verification as a mandatory control.
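One way to operationalise out-of-band verification is a one-time challenge code delivered over a separate, pre-registered channel (for example, a callback to a directory phone number) that the requester must echo back before a high-value transfer is released. The sketch below is a minimal illustration of that pattern; the function names and the $10,000 threshold are assumptions for the example, not a prescribed policy:

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a short one-time code to deliver via the out-of-band
    channel (e.g. a callback to a pre-registered phone number)."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_response(expected: str, supplied: str) -> bool:
    """Constant-time comparison so timing does not leak the code."""
    return hmac.compare_digest(expected, supplied)

def authorize_transfer(amount: float, supplied_code: str,
                       expected_code: str,
                       threshold: float = 10_000.0) -> bool:
    """Release the transfer only if it is below the review threshold
    or the out-of-band challenge code matches. A video call alone,
    however convincing, never satisfies this control."""
    if amount < threshold:
        return True
    return verify_response(expected_code, supplied_code)
```

The key design point is that the challenge travels over a channel the attacker does not control: even a flawless live deepfake on the video call cannot answer a code sent to the real executive's registered phone.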

Deepfaic Threat Intelligence · [email protected] · 31 March 2026 · deepfaic.com
