The Deep Brief · May 4, 2026 · 6 min read

EU AI Act Hits 90 Days: What High-Risk Financial Systems Must Do Before August 2

The EU AI Act high-risk financial system deadline is August 2, 2026. Here is what classification means, what compliance requires, and what penalties firms face.

Rosalie Chirip
Senior Editor at deepidv

The EU AI Act's compliance date for high-risk AI systems in the financial sector is August 2, 2026. Ninety days. Penalties for non-compliance reach 35 million euros or 7 percent of worldwide turnover for prohibited practices, 15 million euros or 3 percent for other infringements, and apply equally to non-EU firms that place AI systems on the European market. Most banks and fintechs operating across EU jurisdictions have known about the deadline since 2024. A surprising number have not yet completed conformity assessments, finalized technical documentation, or registered their high-risk systems in the EU database.

The reason is rarely awareness. It is classification. The threshold question for any AI system in financial services is whether it qualifies as "high-risk" under Annex III of Regulation (EU) 2024/1689, and the answer is rarely clear-cut. Credit scoring is unambiguously high-risk. Fraud detection is contested. Behavioural analytics that informs a credit decision is high-risk by indirect application, even when the analytics system itself was not designed for credit. Classification analysis is typically the longest single workstream in the compliance project.

What "high-risk" means under the AI Act

The AI Act takes a graduated, risk-oriented approach. Systems that pose unacceptable risks are prohibited. Systems that pose high risks are permitted subject to strict obligations. Limited-risk systems face transparency requirements only. Minimal-risk systems are unregulated.

For financial services, high-risk classification typically applies to AI systems used for creditworthiness assessment, credit scoring, fraud detection in regulated transactions, and biometric identification or categorization. The classification turns on intended use, not just technical architecture. A general-purpose foundation model deployed in a credit decision pipeline becomes a high-risk system through the deployment, even if the underlying model is not itself high-risk.

The obligations on high-risk systems are substantial. Providers must implement risk management throughout the lifecycle. Training and test data must meet quality criteria, including representativeness and absence of bias. Technical documentation must be complete and current. Logging must be automatic and comprehensive enough to allow traceability. Human oversight must be designed into the system, not bolted on. Accuracy, robustness, and cybersecurity must be assessed and demonstrated.

Conformity assessment is required before the system is placed on the market. CE marking must be affixed. Registration in the EU database for high-risk AI systems must be complete. Post-market monitoring must continue indefinitely.

Why financial services is the test case

The European Commission deliberately staggered the AI Act timeline to give the financial sector time to adapt. Prohibitions and AI literacy obligations took effect in February 2025. Governance provisions and general-purpose model obligations followed in August 2025. High-risk financial systems have until August 2, 2026. The remaining provisions apply by August 2027.

The phasing reflects the regulatory reality. Financial services already operates under explanation-of-decision requirements through GDPR Article 22, model risk management under ECB and EBA guidance, and outcome-fairness obligations under various consumer credit directives. The AI Act layers on top of these, but it does not replicate them. It introduces new requirements for technical documentation depth, training data governance, and post-market monitoring that existing financial services regulation does not capture.

The result is a compliance overlay rather than a compliance replacement. Banks and fintechs cannot satisfy the AI Act simply by pointing to their model risk management framework. They need a parallel evidentiary record built specifically for AI Act conformity assessment.

The Digital Omnibus complication

In November 2025, the European Commission introduced the Digital Omnibus proposal, an attempt to harmonize the AI Act with GDPR, NIS 2, DORA, and the Data Act. The proposal would consolidate incident reporting into a single point, align breach notification thresholds, and clarify the use of personal data in AI for creditworthiness assessments.

The Digital Omnibus is currently under the ordinary legislative procedure. It will not be finalized before August 2, 2026. Firms that delay AI Act compliance preparation in anticipation of Digital Omnibus simplification will miss the deadline. The safe approach is to comply with the AI Act as it stands and absorb downstream simplification when it arrives.

That said, the Digital Omnibus signals one concrete change in regulatory direction. Authorities recognize the compliance burden of overlapping incident reporting regimes and are working toward consolidation. Firms designing AI Act incident response procedures should design for eventual interoperability with DORA and NIS 2 reporting, even if the formal harmonization is still in negotiation.

What auditors will look for

EU regulators have signaled that "black box" AI is not acceptable in financial crime compliance. Every AI-driven decision in a KYC or AML workflow must produce a reason code, an audit trail, and a human-readable explanation of why the outcome was reached. The FATF Plenary of October 2025 formally approved an AI Horizon Scan covering generative and agentic AI in financial crime, and AI-specific guidance is expected through 2026 and 2027.

For compliance technology architecture, that translates into a small number of concrete requirements. Every AI-generated decision must be reproducible from preserved inputs and model versioning. Every decision must have a logged explanation that is meaningful to a non-technical reviewer. Every model used in a high-risk decision must have current technical documentation including data lineage, training methodology, and validation results. Every deployment must be registered, dated, and version-controlled.
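The four requirements above can be sketched as a single per-decision audit record. Everything here is an illustrative assumption, not language from the Act: the `DecisionRecord` type, its field names, and the fingerprinting scheme are hypothetical conventions a firm might adopt.

```python
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class DecisionRecord:
    """One AI-driven decision, captured with enough context to reproduce
    and explain it later. All field names are hypothetical."""

    system_id: str       # identifier of the registered high-risk system
    model_version: str   # pinned model version used for this decision
    inputs: str          # reference to preserved inputs, exactly as scored
    outcome: str         # the decision reached
    reason_code: str     # machine-readable reason code
    explanation: str     # human-readable explanation for a reviewer

    def fingerprint(self) -> str:
        """Deterministic content hash, so a reviewer can check that a
        stored record matches the decision it claims to describe."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()


record = DecisionRecord(
    system_id="credit-scoring-eu-01",             # hypothetical identifiers
    model_version="2026.04.2",
    inputs="feature-store:applicant-1001@snap-388",
    outcome="declined",
    reason_code="DTI_ABOVE_THRESHOLD",
    explanation="Debt-to-income ratio exceeds the approval threshold.",
)
```

The frozen dataclass plus content hash is one way to make records tamper-evident without committing to any particular storage backend.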

Firms that operate transaction monitoring, sanctions screening, fraud detection, or credit decisioning under AI today should expect supervisory examiners to request precisely this documentation in 2026 examinations.

What 90 days looks like as a project plan

The realistic project plan for a firm that has not yet completed AI Act conformity for its high-risk systems is tight but feasible. Week one is classification. Every AI system in production gets categorized against Annex III. Borderline cases are flagged for legal review. The output is a definitive list of high-risk systems requiring conformity.
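A week-one triage over a production inventory could be sketched like this. The category sets, system names, and inventory format are made up for illustration; real classification remains a legal judgment against Annex III, which is why anything borderline goes to legal review rather than being decided in code.

```python
# Illustrative category sets -- not an exhaustive reading of Annex III.
HIGH_RISK_USES = {"credit_scoring", "creditworthiness", "biometric_id"}
BORDERLINE_USES = {"fraud_detection", "behavioural_analytics"}


def triage(inventory):
    """Split an AI inventory into high-risk, flagged-for-legal-review,
    and out-of-scope buckets. Each entry is (name, intended_use,
    feeds_credit_decision)."""
    buckets = {"high_risk": [], "legal_review": [], "out_of_scope": []}
    for name, intended_use, feeds_credit_decision in inventory:
        # Feeding a credit decision makes a system high-risk by indirect
        # application, even if it was not designed for credit.
        if intended_use in HIGH_RISK_USES or feeds_credit_decision:
            buckets["high_risk"].append(name)
        elif intended_use in BORDERLINE_USES:
            buckets["legal_review"].append(name)
        else:
            buckets["out_of_scope"].append(name)
    return buckets


systems = [
    ("scorer-v3", "credit_scoring", True),
    ("txn-monitor", "fraud_detection", False),
    ("chat-summarizer", "internal_tooling", False),
    ("behaviour-model", "behavioural_analytics", True),  # feeds credit
]
```

Note that `behaviour-model` lands in the high-risk bucket despite its borderline intended use, which mirrors the indirect-application point made earlier.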

Weeks two through four are documentation. For each high-risk system, the firm assembles technical documentation per Annex IV: intended purpose, data governance documentation, training methodology, validation results, risk management procedures, human oversight design, and post-market monitoring plan. Most of this exists in fragmentary form across model risk management, vendor due diligence, and product documentation. The work is consolidation, not creation.
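Consolidation is easier to track when the Annex IV sections become an explicit checklist per system. A minimal gap check, assuming a hypothetical per-system dossier keyed by the sections the paragraph lists:

```python
# Section names drawn from the Annex IV documentation list above; the
# dossier dict itself is a hypothetical internal convention.
ANNEX_IV_SECTIONS = [
    "intended_purpose",
    "data_governance",
    "training_methodology",
    "validation_results",
    "risk_management",
    "human_oversight",
    "post_market_monitoring",
]


def documentation_gaps(dossier: dict) -> list:
    """Return the Annex IV sections still missing or empty for one system."""
    return [s for s in ANNEX_IV_SECTIONS if not dossier.get(s)]


scorer_dossier = {
    "intended_purpose": "Consumer credit scoring for EU retail lending.",
    "data_governance": "See model risk management record MRM-2024-17.",
    "validation_results": "Backtest report, Q1 2026.",
}
```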

Weeks five through eight are gap remediation. Documentation gaps get filled. Logging that does not meet automatic-logging requirements gets upgraded. Human oversight design gets reviewed against actual operational practice and corrected where the two diverge. Post-market monitoring procedures get formalized.
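One way to make logging automatic rather than caller-dependent is to wrap the scoring call itself. A sketch under stated assumptions: the `score` function is a toy stand-in for a real model call, and the in-memory list stands in for an append-only log store.

```python
import functools
import time

AUDIT_LOG = []  # stand-in for an append-only audit log store


def audited(system_id: str, model_version: str):
    """Decorator that records inputs, outcome, and model version for
    every call, so logging cannot be skipped by individual callers."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**inputs):
            outcome = fn(**inputs)
            AUDIT_LOG.append({
                "ts": time.time(),
                "system_id": system_id,
                "model_version": model_version,
                "inputs": inputs,
                "outcome": outcome,
            })
            return outcome
        return inner
    return wrap


@audited(system_id="credit-scoring-eu-01", model_version="2026.04.2")
def score(income: float, debt: float) -> str:
    # Toy stand-in for the real model call.
    return "declined" if debt / income > 0.4 else "approved"
```

Putting the record-keeping in the wrapper, not the business logic, is what makes the logging "automatic" in the sense the remediation step requires.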

Weeks nine through twelve are conformity assessment, CE marking, and EU database registration. For systems requiring third-party conformity assessment, this window is too tight. For systems eligible for self-assessment, which is the majority in financial services, the timeline is feasible if documentation is in good order.

Firms that have not yet started classification should start this week. Firms that completed classification in 2025 and have parked the project should restart it now. The deadline is hard.

EU AI Act FAQ

What is the EU AI Act high-risk financial deadline?
The EU AI Act compliance date for high-risk AI systems in the financial sector is August 2, 2026. By that date, providers and deployers must complete conformity assessments, finalize technical documentation, affix CE marking where required, and register high-risk systems in the EU database. Non-compliance penalties reach 35 million euros or 7 percent of worldwide turnover for prohibited practices.
Which financial AI systems are classified as high-risk?
High-risk classification under Annex III typically applies to AI systems used for creditworthiness assessment, credit scoring, biometric identification, biometric categorization, and certain fraud detection use cases. The classification turns on intended use rather than technical architecture, so general-purpose models deployed in regulated decisions inherit high-risk status through deployment.
Does AI Act compliance replace existing model risk management requirements?
No. The AI Act layers on top of existing financial services regulation, including GDPR Article 22, ECB and EBA model risk guidance, and consumer credit directives. Firms cannot satisfy the AI Act by pointing to their existing model risk management framework. They need a parallel evidentiary record built specifically for AI Act conformity assessment.
What does explainability mean in practice for AI Act compliance?
Every AI-generated decision in a high-risk system must be reproducible from preserved inputs and model versioning, must have a logged explanation that is meaningful to a non-technical reviewer, and must be supported by current technical documentation including data lineage, training methodology, and validation results. "Black box" AI is not acceptable in financial crime compliance.
How does the Digital Omnibus proposal affect AI Act compliance?
The Digital Omnibus is a November 2025 proposal to harmonize AI Act, GDPR, NIS 2, DORA, and Data Act requirements. It is under legislative procedure and will not be finalized before August 2, 2026. Firms should comply with the AI Act as it stands and absorb any downstream simplification later, while designing incident response procedures for eventual interoperability with DORA and NIS 2 reporting.
Tags: Advanced, Article, Regulatory Compliance, AI & Trust, FinTech, Banking, EU


What is deepidv?

Not everyone loves compliance — but we do. deepidv is the AI-native verification engine and agentic compliance suite built from scratch. No third-party APIs, no legacy stack. We verify users across 211+ countries and territories in under 150 milliseconds, catch deepfakes that liveness checks miss, and let honest users through while keeping bad actors out.
