The Deep Brief · Curated Playbook · Global · Apr 29, 2026 · 14 min read

The AI Verification Playbook: Seven Surfaces Every Compliance Team Needs to Cover

Compliance was built for human fraud. The agentic era added six more attack surfaces. Here is the playbook for covering all of them.


Every compliance program built before 2024 assumed the same threat model: a human applicant, a human transaction, a human document. That assumption is now wrong on every dimension. Agents act. AI generates documents. Synthetic media impersonates real people. The compliance stack has to cover seven surfaces almost overnight, and most teams are covering one or two.

This playbook walks through each of the seven surfaces, the threat patterns hitting them in 2026, and the verification controls that work. Every section maps to a deepidv product so you can build the program against actual deployable capability, not theory.

1. Agent identity at the MCP boundary

Agents now reach servers through the Model Context Protocol. Every MCP server accepts connections from agents claiming any identity, with no protocol-level proof. That gap is where the next decade of fraud lives.

The threat pattern. An agent presents itself as `agent-from-stripe` or `customer-finance-agent`. The MCP server has no way to validate the claim. The agent runs queries, requests transfers, or pulls customer data based on a string in a header.

The control. Verify agent identity, intent, and authorization at the gateway before the request reaches your server. Track behavioral fingerprint across sessions. Detect intent drift. Enforce scope at the action level.
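The gateway check above can be sketched in a few lines. This is a minimal illustration, not a deepidv API: the shared-secret token format, the agent names, and the scope table are all assumptions standing in for whatever credential scheme the gateway actually provisions.

```python
import hashlib
import hmac

# Illustrative shared secret; a real deployment would provision a key per agent.
GATEWAY_SECRET = b"demo-shared-secret"

# Hypothetical scope table: which actions each known agent may perform.
ALLOWED_SCOPES = {
    "customer-finance-agent": {"read:invoices", "read:balances"},
}

def sign_claim(agent_id: str, scope: str) -> str:
    """Signature the gateway issued when the agent was provisioned."""
    msg = f"{agent_id}|{scope}".encode()
    return hmac.new(GATEWAY_SECRET, msg, hashlib.sha256).hexdigest()

def authorize(agent_id: str, scope: str, signature: str) -> bool:
    """Reject unless the identity claim is signed AND the action is in scope."""
    expected = sign_claim(agent_id, scope)
    if not hmac.compare_digest(expected, signature):
        return False  # a bare string in a header proves nothing
    return scope in ALLOWED_SCOPES.get(agent_id, set())

# An unsigned claim fails; a signed, in-scope request passes;
# a signed but out-of-scope request still fails.
assert authorize("customer-finance-agent", "read:invoices", "forged") is False
assert authorize("customer-finance-agent", "read:invoices",
                 sign_claim("customer-finance-agent", "read:invoices")) is True
assert authorize("customer-finance-agent", "write:transfers",
                 sign_claim("customer-finance-agent", "write:transfers")) is False
```

The point of the sketch: identity, intent, and scope are three separate checks, and all three happen before the request touches the MCP server.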

2. AI-generated documents in your pipeline

Paystubs, W-2s, government IDs, receipts, contracts, and medical records are all being generated by AI tools at near-perfect visual fidelity. Underwriting, claims, and HR teams are approving applications backed by ChatGPT-generated documents.

The control. Forensic analysis at five layers: PDF metadata, visual artifacts, content cross-reference, issuer validation, and generator attribution. Sub-500ms decision per document with full evidence chain.
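To make one of those five layers concrete, here is a sketch of the PDF-metadata layer only. The `/Producer` parsing and the watchlist of generator names are illustrative assumptions, not the actual deepidv forensics pipeline.

```python
import re

# Hypothetical watchlist of programmatic PDF generators worth a second look.
SUSPECT_PRODUCERS = ("reportlab", "fpdf", "wkhtmltopdf")

def producer_flag(pdf_bytes: bytes) -> tuple[bool, str]:
    """Pull the /Producer string out of raw PDF bytes and flag it if the
    document claims to come from a programmatic generator."""
    m = re.search(rb"/Producer\s*\(([^)]*)\)", pdf_bytes)
    if not m:
        return True, "missing-producer"  # absent metadata is itself a signal
    producer = m.group(1).decode("latin-1", "replace")
    flagged = any(s in producer.lower() for s in SUSPECT_PRODUCERS)
    return flagged, producer

# A paystub emitted by a scripting library rather than a payroll system:
sample = b"%PDF-1.7 ... /Producer (ReportLab PDF Library) ..."
assert producer_flag(sample) == (True, "ReportLab PDF Library")
assert producer_flag(b"%PDF-1.7 no info dict") == (True, "missing-producer")
```

Metadata alone is trivially spoofable, which is exactly why the control layers it with visual artifacts, content cross-reference, issuer validation, and generator attribution.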

3. AI-generated content in user uploads

Social platforms, marketplaces, dating apps, news organizations, and creator platforms are receiving AI-generated images, video, and audio at scale. C2PA Content Credentials are emerging as the provenance standard. Most platforms are not yet validating them.

The control. Detect AI-generated content at the upload boundary across image, video, and audio modalities. Validate C2PA manifests when present. Apply policy per detection outcome (label, restrict, demote, block).
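The per-outcome policy step can be sketched as a small decision table. The threshold values and the action set here are assumptions for illustration; real policy tiers would be tuned per platform.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    LABEL = "label"
    DEMOTE = "demote"
    BLOCK = "block"

def upload_policy(ai_score: float, has_valid_c2pa: bool) -> Action:
    """Map a detector score plus C2PA manifest status to a policy action.
    has_valid_c2pa means the upload carried Content Credentials that
    validated; thresholds below are illustrative."""
    if has_valid_c2pa:
        # Disclosed provenance: label synthetic content instead of blocking it.
        return Action.LABEL if ai_score >= 0.5 else Action.ALLOW
    if ai_score >= 0.9:
        return Action.BLOCK   # confident synthetic, no provenance
    if ai_score >= 0.6:
        return Action.DEMOTE
    return Action.ALLOW

assert upload_policy(0.95, False) is Action.BLOCK
assert upload_policy(0.95, True) is Action.LABEL
assert upload_policy(0.2, False) is Action.ALLOW
```

Note the asymmetry: the same detector score produces a softer action when a valid C2PA manifest is present, which rewards honest provenance disclosure.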

4. Deepfake video at contact surfaces

Customer service callbacks, video conferencing, KYC liveness, and executive comms are all under deepfake attack. The 2026 generation of video models passes liveness checks that were sufficient in 2023.

The control. Six-layer deepfake detection running at sub-200ms in the call, conference, or KYC flow. Frame-level forensics plus motion analysis plus generator attribution.

5. Voice clones on the phone line

Voice clones from ElevenLabs and emerging synthesizers now defeat speaker verification on most enrolled-voiceprint systems. Phone-based authentication at customer service is the largest open attack surface in financial services.

The control. Real-time voice clone detection at the call boundary. Spectral analysis, prosody fingerprinting, and generator attribution running in under 300ms.

6. AI-aided hiring fraud

North Korean IT worker fraud, AI-generated resumes, deepfake interviews, and identity swaps at Day 1 onboarding are now the leading hiring fraud vectors. The FBI and Treasury have issued repeated advisories. Reported losses exceed $400M annually.

The control. AI resume forensics at application, deepfake detection during interview, biometric continuity from application through Day 1, and continuous post-hire identity verification.
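The biometric-continuity piece of that control can be sketched as a pairwise embedding comparison across stages. The embeddings, dimensionality, and threshold below are illustrative assumptions; a production system would use a proper face-recognition model's vectors.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def continuity_ok(stages: list[list[float]], threshold: float = 0.85) -> bool:
    """Every stage (application, interview, Day 1) must match the one
    before it; a swap at any handoff breaks the chain."""
    return all(cosine(stages[i], stages[i + 1]) >= threshold
               for i in range(len(stages) - 1))

# Toy 3-d embeddings: same person across all stages vs. a Day 1 swap.
same_person = [[1.0, 0.1, 0.0], [0.98, 0.12, 0.02], [0.97, 0.09, 0.01]]
swap_at_day1 = [[1.0, 0.1, 0.0], [0.98, 0.12, 0.02], [0.1, 0.9, 0.3]]
assert continuity_ok(same_person) is True
assert continuity_ok(swap_at_day1) is False
```

Comparing consecutive stages rather than only application-vs-Day 1 is what catches the common pattern where the real applicant interviews and a different person shows up to work.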

7. Synthetic identity at onboarding

Synthetic identity fraud now accounts for $40B+ in annual US losses. The pattern: real SSNs paired with fabricated names, identities aged 12 to 24 months, then a bust-out at the credit limit.

The control. Synthetic identity scoring at onboarding combining SSN-name pairing analysis, behavioral fingerprint, device intelligence, and network reputation. Continuous monitoring for aged-bust-out signatures.
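The signal-combination step can be sketched as a weighted score. The signal names, weights, and threshold here are illustrative assumptions, not deepidv's actual model.

```python
# Hypothetical per-signal weights; each signal is a risk score in [0, 1].
WEIGHTS = {
    "ssn_name_mismatch": 0.35,       # SSN issuance inconsistent with identity
    "thin_behavioral_history": 0.25, # no organic footprint for claimed age
    "device_reputation_risk": 0.20,
    "network_cluster_risk": 0.20,    # shared addresses/devices with other apps
}

def synthetic_risk(signals: dict[str, float]) -> float:
    """Combine per-signal risk scores into one score in [0, 1].
    Missing signals contribute zero risk."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

applicant = {
    "ssn_name_mismatch": 1.0,   # e.g. SSN issued before claimed birth year
    "thin_behavioral_history": 0.8,
    "device_reputation_risk": 0.4,
    "network_cluster_risk": 0.9,
}
score = synthetic_risk(applicant)
assert abs(score - 0.81) < 1e-9  # 0.35 + 0.20 + 0.08 + 0.18
```

A linear combination is the simplest possible form; the value of the control is less the arithmetic than the breadth of signals feeding it, plus the continuous monitoring that catches the 12-to-24-month aging window before bust-out.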

The seven-surface program

Most compliance teams cover one or two surfaces well. The 2026 program covers all seven. The deepidv platform was built for exactly this scope. The shift is not about adding more vendors. It is about consolidating onto a platform that treats AI verification as one program with seven surfaces.

AI Verification Playbook FAQ

What is AI verification?
AI verification is the umbrella term for verifying anything AI creates or operates. That includes autonomous agents acting on a user's behalf, AI-generated documents in financial workflows, AI-generated content uploaded to platforms, deepfake media in calls and customer service, AI-aided hiring fraud, and AI-aided synthetic identity at onboarding.
How is this different from traditional identity verification?
Traditional identity verification handles humans. AI verification adds the six surfaces traditional KYC does not cover: agents, documents generated by AI, content generated by AI, deepfake media, AI-aided hiring, and AI-aided synthetic identity. Same platform principle. Expanded scope.
Do I need to deploy all seven controls at once?
No. Most enterprise customers start with the surfaces causing the most current loss (typically deepfakes at customer service, AI documents at underwriting, and synthetic identity at onboarding) and expand to the others over six to twelve months.
What is the cost model for the full program?
Pricing is unified through a credit system across all seven surfaces. Starter at $299 a month covers small teams. Growth at $1,499 covers mid-market. Scale at $5,999 covers enterprise. Custom pricing available for annual commits at higher volume.
How does this align with regulatory expectations?
Every control returns examination-ready documentation aligned to FFIEC, OCC SR 11-7, and EU AI Act formats. Audit trails are immutable and exportable. Model risk management documentation auto-generates per decision.
What about model fairness and bias?
Detection models are bias-tested per the Uniform Guidelines on Employee Selection Procedures and the EU AI Act. Disparate impact analysis ships with every model release. Continuous monitoring detects fairness drift.
Tags: Intermediate · Playbook · AI Verification · Compliance · MCP · Synthetic Identity · Deepfake Detection · C2PA

What is deepidv?

Not everyone loves compliance — but we do. deepidv is the AI-native verification engine and agentic compliance suite built from scratch. No third-party APIs, no legacy stack. We verify users across 211+ countries and territories in under 150 milliseconds, catch deepfakes that liveness checks miss, and let honest users through while keeping bad actors out.