
Under AMLA, 95% false positives become a regulator’s problem

By Max Irwin, Regional Vice President EU, Shufti

By the end of the day on 22 April 2026, around forty European credit and financial institutions will have submitted their anti-money-laundering data to the EU’s new Anti-Money Laundering Authority (AMLA). The data AMLA receives in Frankfurt this week will calibrate the risk-assessment models that decide which firms fall under AMLA’s direct supervision from 2028.

Most of the commentary around AMLA has focused on jurisdictional questions: who regulates whom, how national supervisors hand off authority, and what happens to existing sanctions enforcement. Those questions matter. But they obscure a more immediate one: what is AMLA actually measuring?

The answer, from the methodology AMLA has published and the calibration exercises it has run with national authorities, is not reassuring for firms that have treated AML screening as a volume problem.

The end of the volume defence

For the last decade, “good” AML screening has quietly meant “a lot of screening.” Larger watchlists. More enrichment providers. More transactions reviewed, more alerts raised, more reports filed. The industry’s dirty secret, a tolerated benchmark of 95% false-positive rates, has been accepted as an unavoidable cost of coverage. If regulators asked why an alert fired, the defensible answer was “the name matched.”

AMLA’s supervision model flips that question. Under direct EU supervision, the test is not whether an alert fired, but whether it was reasonable for it to fire and whether the firm can demonstrate defensible reasoning for escalation or dismissal. A 95% noise floor is not a resource constraint under that model. It is a control deficiency.

This matters because the firms AMLA will supervise directly are the firms most exposed to the jurisdictional complexity that makes contextual judgment non-optional. These are institutions operating in multiple sanctions regimes simultaneously, with customer bases that include politically exposed persons, cross-border corporate structures, and emerging-market counterparties. A screening engine that returns a hundred name-matches for a customer whose risk profile warrants three is not a risk-management tool. It is a compliance liability.

What contextual screening actually requires

The phrase “contextual screening” gets used loosely, often as a marketing descriptor rather than a technical specification. Let me be precise.

Defensible AML decisioning under AMLA-style supervision requires three capabilities working together, not in sequence:

First, integrated signal coverage. Name-match alone is insufficient because sanctions designations, politically-exposed-person classifications, and adverse-media indicators each carry different evidentiary weight and require different remediation pathways. A screening engine that treats these as separate products with separate APIs and separate teams is one that guarantees fragmented judgment.

Second, contextual risk scoring rather than binary matching. An alert should carry a weighted judgment about whether the match is risk-relevant for the specific customer, jurisdiction, and transaction type, not simply whether the strings align. Most first-generation screening engines produce binary output (“match” / “no match”) and leave contextual weighting to the analyst. AMLA’s methodology, and the direction of national supervisors applying it, favours engines that produce defensible scoring at the point of the alert.

Third, auditable reasoning. Every alert, every dismissal, and every escalation must generate a reasoning trail that a regulator can read without assistance from the firm’s internal team. This is the point where “AI-powered” AML offerings quietly fail in review. A model that cannot explain why it fired or dismissed an alert is not a compliance tool. It is an unverified automation layer sitting between the firm and its regulator.
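To make the second and third capabilities concrete, here is a minimal sketch of what per-alert contextual scoring with a built-in reasoning trail might look like. Everything here is illustrative: the signal weights, the scoring formula, the threshold, and all names (SIGNAL_WEIGHTS, score_alert, triage) are hypothetical assumptions, not any vendor's actual implementation. The point is structural: the score is a weighted combination of match strength and context, and every step appends a human-readable line a reviewer could read without the firm's internal team.

```python
from dataclasses import dataclass, field

# Hypothetical evidentiary weights: sanctions hits carry more weight
# than PEP classifications, which carry more than adverse media.
SIGNAL_WEIGHTS = {"sanctions": 1.0, "pep": 0.6, "adverse_media": 0.3}

@dataclass
class Alert:
    customer_id: str
    signal: str               # "sanctions", "pep", or "adverse_media"
    name_similarity: float    # 0..1 fuzzy name-match strength
    jurisdiction_risk: float  # 0..1 customer/counterparty jurisdiction risk
    reasoning: list = field(default_factory=list)

def score_alert(alert: Alert) -> float:
    """Combine match strength with contextual risk, logging each step."""
    weight = SIGNAL_WEIGHTS[alert.signal]
    alert.reasoning.append(f"signal={alert.signal} weight={weight}")
    score = weight * alert.name_similarity
    alert.reasoning.append(
        f"name_similarity={alert.name_similarity} -> base={score:.2f}")
    # Dampen or amplify by jurisdiction context (0.5x to 1.0x).
    score *= 0.5 + 0.5 * alert.jurisdiction_risk
    alert.reasoning.append(
        f"jurisdiction_risk={alert.jurisdiction_risk} -> score={score:.2f}")
    return score

def triage(alert: Alert, escalate_at: float = 0.5) -> str:
    """Escalate or dismiss, recording the decision and threshold used."""
    decision = "escalate" if score_alert(alert) >= escalate_at else "dismiss"
    alert.reasoning.append(f"decision={decision} (threshold={escalate_at})")
    return decision
```

A strong sanctions match in a high-risk jurisdiction escalates; a strong adverse-media name match in a low-risk context is dismissed, and in both cases the reasoning list records why. That last property, not the scoring arithmetic, is what distinguishes an auditable engine from an unverified automation layer.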

The question every MLRO should ask their provider this week

AMLA’s direct supervision regime does not begin until 2028. But this week’s data call is the first hard test of the models that will do the selecting. Firms whose submitted data reveals high noise floors, inconsistent escalation logic, or reasoning gaps will not have two years of warning. The selection criteria are being calibrated right now.

For Money Laundering Reporting Officers (MLROs) and Chief Compliance Officers, the practical implication is straightforward. Ask your current screening provider one question: what percentage of the alerts we raise are false positives, and can you explain the reasoning behind any alert we select? If the answer is “we don’t track that” or “the reasoning is in the name match,” that is not a neutral answer. It is the gap AMLA is modelling.

The firms that move first on contextual, integrated, auditable AML screening will not just avoid becoming AMLA’s early case studies. They will have the cleanest story to tell when direct supervision begins and the cleanest systems to hand to a regulator who is going to ask different questions than national authorities have asked for the last twenty years.

About the author

Max Irwin is Regional Vice President EU at Shufti, a global identity verification and compliance company. Shufti’s Agentic Contextual Screening platform serves obliged entities across the EU and supports AMLA-ready AML compliance across 215+ sanctions regimes, 2.6M PEP profiles, and 415+ adverse-media categories.
