False positives, where a system mistakenly flags a person or entity as high-risk, are not only an operational headache. In the era of the EU AI Act, GDPR, and equivalent global frameworks, they can create serious legal exposure. This article explores the risks, regulatory obligations, and safeguards that compliance teams must consider when deploying AI in adverse media screening.
The Nature of the Risk: More Than Just Inconvenience
A false positive in adverse media screening is not a harmless error. It may lead to:
- Reputational harm for the subject flagged
- Wrongful denial of services or financial exclusion
- Regulatory breaches due to unlawful profiling
- Litigation under defamation, data protection, or anti-discrimination laws
If, for example, an individual is incorrectly linked to a crime due to name similarity in a poorly contextualised article, the ramifications can include frozen accounts, terminated contracts, or being added to an internal watchlist. These consequences raise not only ethical questions but legal obligations under emerging AI governance regimes.
The EU AI Act: High-Risk Classification and Compliance Duties
Under the EU AI Act (specifically Articles 6, 10–15, and Annex III), systems used for evaluating creditworthiness, biometric identification and some aspects of KYC due diligence can be classified as “high-risk.” This designation triggers specific requirements for:
- Accuracy and robustness: Systems must minimise errors and false positives.
- Transparency: Users must be informed they are subject to an AI decision.
- Human oversight: There must be meaningful human review and override options.
- Auditability: Full logs must be kept for tracing how a decision was made.
Providers and deployers of such systems, especially financial institutions, must ensure that they can explain the rationale behind every flagged alert. Black-box models with no audit trail or internal logic are likely to be non-compliant.
GDPR and Profiling: Consent, Accuracy and the Right to Explanation
Even before the EU AI Act’s obligations fully apply, the General Data Protection Regulation (GDPR) already imposes duties that bite when AI is used for profiling individuals. Article 22 GDPR restricts solely automated decisions that produce legal or similarly significant effects unless specific conditions are met.
In the context of false positives from adverse media screening, several GDPR principles are at play:
- Accuracy (Article 5): Personal data must be accurate and kept up to date. Flagging someone based on outdated or misleading information can violate this principle.
- Data minimisation: Collecting speculative or irrelevant information, such as tagging an individual for something unrelated to them, may be unlawful.
- Right to rectification and objection: Individuals have a right to correct inaccuracies and object to automated processing.
- Right to explanation (Recital 71): While not absolute, there is a growing expectation that decisions be explainable, particularly in regulated sectors.
Organisations using AI-based KYC tools must therefore implement human-in-the-loop safeguards, robust data governance, and procedures for handling complaints or challenges to flagged results.
Sources of Error: From Bad Data to Bad Models
False positives can stem from several weak points in the screening pipeline:
- Name ambiguity: Shared or similar names (e.g., John Smith vs. Jon Smythe) cause mismatches (see the sketch below).
- Lack of disambiguation: Systems may fail to verify if the article is truly about the person in question.
- Language and context errors: Translations or culturally specific terms can distort meaning.
- Sensational media: Some news stories are speculative, unverified, or slanted, yet still processed as factual.
- Bias in training data: AI models trained on skewed or unbalanced datasets can over-flag certain regions, ethnicities or professions.
The implication is clear: automation bias, the tendency to trust machine outputs, can amplify risks unless decision-makers remain critical and involved.
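To make the name-ambiguity problem concrete, here is a minimal Python sketch, illustrative only and not any vendor’s production matcher, of why a raw string-similarity score should never confirm a match on its own. The candidate and subject records, their field names, and the threshold are all assumptions for the example.
```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude character-level similarity; real systems also use phonetic and transliteration-aware matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_plausible_match(candidate: dict, subject: dict, threshold: float = 0.75) -> bool:
    """A name score alone never confirms a match: require at least one corroborating attribute."""
    if name_similarity(candidate["name"], subject["name"]) < threshold:
        return False
    return (
        candidate.get("dob") is not None and candidate.get("dob") == subject.get("dob")
    ) or (
        candidate.get("country") is not None and candidate.get("country") == subject.get("country")
    )

# "John Smith" vs. "Jon Smythe" clears a loose character-similarity threshold,
# but with no shared date of birth or country the alert should not auto-confirm.
print(is_plausible_match(
    {"name": "Jon Smythe", "country": "UK"},
    {"name": "John Smith", "dob": "1980-02-14", "country": "DE"},
))  # False
```
Production screening engines layer on phonetic, transliteration-aware and machine-learned matching, but the principle is the same: disambiguation needs more than a name.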
Redress and Explainability: Designing for Accountability
So how can organisations mitigate the legal risks of false positives?
1. Maintain an Explainability Layer
Any flagged result must be traceable to its source (e.g., the specific media snippet), with a natural-language explanation of why it was flagged and which entity attributes triggered the match. Models should generate interpretable risk summaries, not cryptic classifications.
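As a rough sketch of what such an explainability layer might record (the field names below are illustrative assumptions, not any particular vendor’s schema), an alert object can carry its own evidence and render a plain-language rationale:
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExplainableAlert:
    """One flagged result, traceable back to its evidence."""
    subject_id: str
    source_url: str                  # where the adverse media item was found
    snippet: str                     # the exact passage that triggered the flag
    matched_attributes: list[str]    # e.g. ["name", "date_of_birth"]
    match_score: float
    model_version: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def rationale(self) -> str:
        """Plain-language explanation that a reviewer, or the data subject, can actually read."""
        attrs = ", ".join(self.matched_attributes)
        return (
            f"Flagged because the article at {self.source_url} mentions an entity matching on "
            f"{attrs} (score {self.match_score:.2f}, model {self.model_version}). "
            f'Supporting passage: "{self.snippet}"'
        )
```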
2. Log Everything for Auditability
Regulators will expect full audit trails: which sources were searched, which model versions were used, how match scores were computed, and what user actions followed. Immutable logs support both internal reviews and external investigations.
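One lightweight way to approximate tamper-evident logging is hash chaining over an append-only file, sketched below. The file name and event fields are hypothetical; a production deployment would more likely use WORM storage or a dedicated audit service.
```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log_path: str, event: dict, prev_hash: str) -> str:
    """Append one event to a JSON-lines audit log, chaining hashes so later tampering is detectable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,            # e.g. sources searched, model version, match score, user action
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record["hash"]          # feed into the next call to continue the chain

# Record a screening run, then the analyst's follow-up decision.
h = append_audit_event("audit.jsonl", {
    "action": "screening_run",
    "sources": ["news_index_v3"],
    "model_version": "2.4.1",
    "match_score": 0.91,
}, prev_hash="GENESIS")
append_audit_event("audit.jsonl", {"action": "analyst_review", "decision": "rejected_false_positive"}, prev_hash=h)
```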
3. Create a Disposition Workflow
Flagged alerts should go through a workflow that allows human users to:
- Confirm or reject the match
- Classify the risk level
- Record decisions and rationale
This human-in-the-loop architecture is crucial for legal defensibility.
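A minimal sketch of such a disposition record, with illustrative states and a rule that no decision can be saved without a reviewer identity and a written rationale, might look like this (names and states are assumptions for the example):
```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Disposition(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"      # match verified by a human reviewer
    REJECTED = "rejected"        # false positive, dismissed with a written rationale
    ESCALATED = "escalated"      # sent for senior or second-line review

@dataclass
class AlertDisposition:
    alert_id: str
    status: Disposition = Disposition.PENDING
    risk_level: Optional[str] = None     # e.g. "low" / "medium" / "high"
    reviewer: Optional[str] = None
    rationale: Optional[str] = None

    def decide(self, reviewer: str, status: Disposition, rationale: str,
               risk_level: Optional[str] = None) -> None:
        """Every decision must carry a reviewer identity and a written rationale."""
        if not rationale:
            raise ValueError("A rationale is mandatory for legal defensibility")
        self.reviewer, self.status, self.rationale, self.risk_level = reviewer, status, rationale, risk_level

# Usage: a reviewer rejects a name-only match and records why.
d = AlertDisposition(alert_id="ALERT-0042")
d.decide("analyst.jdoe", Disposition.REJECTED,
         "Article concerns a different person; DOB and nationality do not match.")
```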
4. Offer Subject Rights Management
Provide mechanisms for flagged individuals to:
- Request a copy of the data held
- Challenge inaccurate or misleading inferences
- Seek erasure of irrelevant data
Even when you have a lawful basis to process data under compliance exemptions, respecting data subject rights enhances reputational trust and reduces litigation risk.
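The sketch below (using an in-memory stand-in for a real alert repository, with hypothetical field names) shows one way access, rectification and erasure requests could be routed while still honouring legal retention obligations such as AML record-keeping:
```python
from dataclasses import dataclass, field
from enum import Enum

class SubjectRequest(Enum):
    ACCESS = "access"      # copy of data held (GDPR Art. 15)
    RECTIFY = "rectify"    # challenge an inaccurate or misleading inference (Art. 16)
    ERASURE = "erasure"    # removal of irrelevant data (Art. 17)

@dataclass
class StoredAlert:
    alert_id: str
    subject_id: str
    rationale: str
    retention_required: bool = False   # e.g. covered by AML record-keeping obligations

@dataclass
class AlertStore:
    """Minimal in-memory stand-in for a real alert repository."""
    alerts: list[StoredAlert] = field(default_factory=list)

    def handle(self, request: SubjectRequest, subject_id: str) -> dict:
        mine = [a for a in self.alerts if a.subject_id == subject_id]
        if request is SubjectRequest.ACCESS:
            return {"alerts": [{"id": a.alert_id, "rationale": a.rationale} for a in mine]}
        if request is SubjectRequest.RECTIFY:
            # Park the contested alerts for human re-review rather than editing them silently.
            return {"queued_for_review": [a.alert_id for a in mine]}
        # ERASURE: delete only what no legal retention obligation covers.
        erasable = [a for a in mine if not a.retention_required]
        self.alerts = [a for a in self.alerts if a not in erasable]
        return {"erased": [a.alert_id for a in erasable]}
```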
Compliance Requires More Than Accuracy
Adverse media screening using AI is here to stay. But as regulatory frameworks like the EU AI Act tighten expectations around transparency, auditability and fairness, compliance teams must rethink their due diligence architecture.
Accuracy alone isn’t enough. Institutions need to design systems that respect the rights of individuals, document their decisions, and allow for challenge and correction. False positives, once seen as a tolerable trade-off in compliance, are now a compliance failure in themselves if left unmitigated.
The future of AI in compliance depends not just on technical power, but on legal responsibility and ethical design. In this landscape, organisations that embed explainability and redress into their systems will not only avoid penalties but also earn trust.
About smartKYC
smartKYC is the leading provider of AI-driven KYC risk screening solutions, serving financial institutions and multinational corporations worldwide. By combining artificial intelligence, linguistic and cultural sensitivity, and deep domain knowledge, smartKYC sets new standards for KYC quality, transforms productivity, and ensures regulatory conformance.
To see smartKYC in action, please schedule a demo.


