deepidv
Digital Identity · June 2, 2026 · 15 min read

Identity Verification for AI Agents: The UAIIP Protocol Explained

AI agents are making financial decisions, signing contracts, and moving money — but no one can verify who deployed them. The UAIIP protocol creates the first human-to-agent identity trust chain.


In 2026, AI agents are no longer experiments. They are production systems. They execute trades on crypto exchanges. They approve loan applications. They process insurance claims. They manage investment portfolios. They send emails, schedule meetings, and authorize payments — all without human intervention for individual decisions.

These agents operate under delegated authority — a human deploys them, configures their permissions, and sets their boundaries. But when an agent executes a transaction, the receiving system has no way to answer three fundamental questions: Who deployed this agent? What authority did they grant it? And is the human behind it actually who the agent claims they are?

Existing identity systems cannot answer these questions because they were built for a world where actors are either humans (authenticated through passwords, biometrics, and documents) or software services (authenticated through API keys and OAuth tokens). AI agents are neither — they are autonomous actors that make decisions on behalf of humans, requiring a new category of identity that connects the human principal to the agent that acts on their behalf.

The Universal AI Identity Protocol (UAIIP) creates this connection.

The Trust Gap in Agentic AI

The Problem

When an AI agent initiates a wire transfer, who is liable? When an agent signs a contract, is it binding? When an agent accesses sensitive data, who authorized the access? When an agent violates a regulation, who faces the enforcement action?

The answer, under every current legal and regulatory framework, is: the human or organization that deployed the agent. The EU AI Act explicitly requires human oversight and identification of AI systems. The FATF Travel Rule requires identifying the originator of financial transfers — including agent-initiated transfers. US Executive Order 14110 mandates AI safety measures including identity and authentication standards.

But there is no protocol that connects the agent to the human in a way that is cryptographically verifiable, privacy-preserving, and regulatory-compliant. Existing approaches verify the agent (through API keys, OAuth tokens, or cryptographic keypairs) or verify the human (through KYC, biometrics, or identity documents) — but not both in a single trust chain.

This gap is where fraud, regulatory failure, and liability collapse happen. An agent with a valid API key but no verified human behind it is an autonomous actor with no accountability. A human with verified KYC but no cryptographic binding to their agent cannot prove which agents are theirs and which are not.

Existing Approaches and Their Limitations

API key authentication verifies that a request comes from something with the right key. It does not verify the identity of the human who created the key, whether the key holder is authorized to perform the requested action, or whether the key has been compromised.

OAuth tokens provide scoped access but do not establish agent identity. A token proves that an authorization was granted — not who the agent is or who controls it.

Agent identity protocols (vorim.ai, Aembit, Cisco Zero Trust) verify agents through cryptographic keypairs (Ed25519, X.509) and trust scores. They establish that the agent is a known entity with a cryptographic identity. But they do not verify the human behind the agent through biometric identity verification, they do not create a cryptographic binding between the human and the agent, and they do not use zero-knowledge proofs to protect the human's identity when the agent presents credentials.

The result is agent identity without human accountability — or human identity without agent attribution. UAIIP provides both.


The UAIIP Architecture: Three Layers

Layer 1: Human Verification (deepidv)

The protocol begins with what deepidv already does better than any other provider: verifying the human. The human principal — the person who will deploy and control the AI agent — completes identity verification through deepidv's 5-layer verification stack.

Document authentication confirms the human's identity document is genuine through forensic analysis (FFT, ELA, noise residuals, template matching, and NFC chip verification where available). Biometric matching confirms the human matches the document photo. Deepfake detection (including injection attack detection) confirms the biometric is not synthetic. FaceX TripleLock encryption protects the biometric data with three-party AES-256-GCM encryption — no single key holder (not deepidv, not the platform, not the human) can access the biometric data alone.

The output of Layer 1 is a verified human identity with the highest assurance level available — the same verification used by financial institutions, crypto exchanges, and regulated platforms worldwide.
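To make the Layer 1 output concrete, here is a minimal sketch of what a verification result might look like. The field names and structure are illustrative assumptions, not deepidv's actual API; the key point is that the record carries assurance claims and an opaque reference, never raw personal data.

```python
# Illustrative sketch only: a hypothetical shape for a Layer 1 result.
# Field names are assumptions, not deepidv's actual response format.
import hashlib
import json

layer1_result = {
    "subject_ref": "vrf-example-subject",  # opaque reference, no PII
    "assurance_level": "high",
    "checks": {
        "document_authentication": True,   # forensic analysis passed
        "biometric_match": True,           # live biometric matches document photo
        "deepfake_detection": True,        # biometric is not synthetic
        "injection_attack_detection": True,
    },
    "verified_at": "2026-06-02T00:00:00Z",
}

# Later layers can reference a commitment to this result without
# ever carrying the underlying identity data.
commitment = hashlib.sha256(
    json.dumps(layer1_result, sort_keys=True).encode()
).hexdigest()
print(len(commitment))  # 64 hex characters
```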

Layer 2: Agent Binding (DID + Cryptographic Binding)

With the human verified, UAIIP creates a cryptographic binding between the human and their AI agent. The agent receives a Decentralized Identifier (DID) — a globally unique, cryptographically verifiable identifier that is controlled by the human principal.

The DID document specifies the agent's identity (a unique identifier on the blockchain), the human principal's verified credential (linked to the Layer 1 verification, but without exposing the human's personal data), the agent's permissions (what the agent is authorized to do — transaction limits, API scopes, time boundaries), the delegation chain (who authorized the agent, under what authority, with what constraints), and revocation status (whether the human has revoked the agent's authority).

The binding is cryptographic — the agent's DID is signed with a key that is derived from the human's verified credential. Forging the binding requires forging the human's verification, which requires defeating deepidv's 5-layer detection stack — including biometric verification, deepfake detection, and injection attack detection.
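The derive-and-sign idea can be sketched with symmetric primitives from the standard library. This uses HMAC purely as a stand-in for the real key-derivation and signature scheme (an actual deployment would use asymmetric keys such as Ed25519); the secret and document bytes are placeholders.

```python
# Sketch of key derivation and binding, with HMAC standing in for the
# real asymmetric scheme. Illustrative only, not the UAIIP algorithm.
import hashlib
import hmac

human_credential_secret = b"secret bound to the verified human"  # placeholder
did_document_bytes = b'{"id": "did:uaiip:agent:1234"}'           # placeholder

# Derive an agent signing key from the human's verified credential...
agent_key = hmac.new(human_credential_secret, b"uaiip/agent-binding/v1",
                     hashlib.sha256).digest()

# ...and bind the DID document to that key.
binding_signature = hmac.new(agent_key, did_document_bytes,
                             hashlib.sha256).hexdigest()

# A verifier performing the same derivation reproduces the binding;
# forging it without the credential-derived key fails.
expected = hmac.new(
    hmac.new(human_credential_secret, b"uaiip/agent-binding/v1",
             hashlib.sha256).digest(),
    did_document_bytes, hashlib.sha256).hexdigest()
assert hmac.compare_digest(binding_signature, expected)
```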

Layer 3: Zero-Knowledge Attestation

When the agent acts — initiating a transaction, accessing a service, or presenting credentials to a relying party — it presents a ZK attestation. The attestation proves three things without revealing any underlying data.

The agent's human principal is verified (the human passed deepidv's identity verification). The agent's permissions are valid (the agent is authorized to perform the requested action). The agent's authority is current (the delegation has not been revoked or expired).

The relying party receives a YES or NO for each claim. It never receives the human's name, birthdate, biometric data, or identity document. It never learns which specific human is behind the agent. It learns only that a verified human exists, has authorized this agent, and has not revoked that authorization.
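From the relying party's side, the decision logic reduces to checking three boolean claims. The attestation shape below is an assumption for illustration; the real protocol would carry proofs rather than plain booleans.

```python
# Illustrative relying-party check. The attestation is modeled as the
# three YES/NO claims described above; shape and names are assumptions.
attestation = {
    "human_principal_verified": True,  # Layer 1 passed
    "permissions_valid": True,         # action is within granted scope
    "authority_current": True,         # delegation not revoked or expired
}

def accept(att: dict) -> bool:
    """Accept only if every claim is a proven YES. Nothing about the
    human's identity is available here, by design."""
    required = ("human_principal_verified", "permissions_valid",
                "authority_current")
    return all(att.get(claim) is True for claim in required)

print(accept(attestation))                                  # True
print(accept({**attestation, "authority_current": False}))  # False
```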

This privacy-preserving approach satisfies regulatory requirements for human oversight and attribution (the human is verifiably behind the agent) while protecting the human's personal data from exposure across every system the agent interacts with.


Comparison: UAIIP vs Existing Protocols

| Capability | API Keys | OAuth | vorim.ai | Cisco Zero Trust | UAIIP |
|---|---|---|---|---|---|
| Agent identity | No | Partial | Yes | Yes | Yes |
| Human principal verification | No | No | No | No | Yes |
| Biometric binding to human | No | No | No | No | Yes |
| Deepfake detection at binding | No | No | No | No | Yes |
| Zero-knowledge attestations | No | No | No | No | Yes |
| Cross-chain portable | No | No | No | No | Yes |
| FATF/MiCA/BSA compliance | No | No | Partial | Partial | Yes |
| Trust scoring | No | No | Yes | Yes | Yes |
| On-chain attestation (SBT) | No | No | No | No | Yes |
| Immutable audit trail | No | No | No | No | Yes |

The differentiator is not any single capability — it is the trust chain. UAIIP is the only protocol that verifies the human, binds the human to the agent, and proves the binding without exposing identity data. Every other approach addresses one or two of these requirements but not all three.

Regulatory Coverage

EU AI Act

The EU AI Act requires that AI systems be identifiable and that human oversight be maintained. UAIIP's human-to-agent binding satisfies both requirements — the agent is identifiable through its DID, and the human principal is verifiably behind the agent through the cryptographic binding.

FATF Travel Rule

The FATF Travel Rule requires identifying the originator of financial transfers. For agent-initiated transfers, the originator is the human principal. UAIIP's ZK attestation proves the originator is verified without exposing their identity to every intermediary in the transfer chain.

MiCA

For crypto-asset service providers (CASPs) that deploy AI agents for trading, compliance monitoring, or customer service, MiCA requires that automated systems be subject to human oversight. UAIIP's binding creates a verifiable link between the CASP's compliance officer and the agents operating under their authority.

GENIUS Act

For stablecoin issuers deploying agents for reserve management, transaction processing, or compliance monitoring, the GENIUS Act's BSA compliance requirements (per the April 2026 FinCEN/OFAC NPRM) extend to agent-initiated activities. UAIIP provides the attribution chain that connects agent activity to a verified human responsible for compliance.

Integration Paths

UAIIP integrates through five paths, covering the major development environments where AI agents operate.

MCP Server — For Claude, Cursor, VS Code, and any MCP-compatible AI client.

TypeScript SDK — For Node.js applications.

Python SDK — For Python applications.

REST API — For direct integration from any platform.

Solidity Contracts — For on-chain attestation verification on Base L2, implemented as ERC-5484 Soulbound Tokens.
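As a sketch of how the SDK paths above fit the three layers end to end, here is a hypothetical Python client. The class, method names, and field shapes are assumptions for illustration, not the shipped SDK; consult the actual SDK documentation for real interfaces.

```python
# Hypothetical end-to-end flow. Names are illustrative assumptions,
# not the real deepidv/UAIIP Python SDK.
class UAIIPClient:
    """Stand-in client spanning agent binding and attestation."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self._agents: dict[str, dict] = {}  # local stand-in for the registry

    def bind_agent(self, principal_credential: str, permissions: dict) -> str:
        """Layer 2: bind a new agent DID to a verified human credential."""
        did = f"did:uaiip:agent:{len(self._agents) + 1}"
        self._agents[did] = {
            "principal": principal_credential,
            "permissions": permissions,
            "revoked": False,
        }
        return did

    def issue_attestation(self, did: str) -> dict:
        """Layer 3: boolean claims only; no principal data is exposed."""
        agent = self._agents[did]
        return {
            "human_principal_verified": True,  # proven via Layer 1 upstream
            "permissions_valid": bool(agent["permissions"]),
            "authority_current": not agent["revoked"],
        }

client = UAIIPClient(api_key="demo-key")
agent_did = client.bind_agent("urn:deepidv:vrf:example",
                              {"apiScopes": ["payments:initiate"]})
print(client.issue_attestation(agent_did)["authority_current"])  # True
```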

UAIIP Protocol FAQ

What is UAIIP?

The Universal AI Identity Protocol — it verifies the human behind an AI agent and issues ZK attestations that prove the agent's authority without exposing the human's identity.

How is this different from API key authentication?

API keys verify the software. UAIIP verifies the human who deployed it and creates a cryptographic trust chain between them. Regulators don't audit code — they audit who deployed it.

What is the FaceX TripleLock connection?

FaceX TripleLock provides three-party AES-256-GCM encryption of biometric data in the human verification layer. No single key holder — not deepidv, not the platform, not the human — can access biometric data alone.

Which regulations does UAIIP satisfy?

EU AI Act (agent identification + human oversight), FATF Travel Rule (agent-initiated transfer attribution), MiCA (automated system oversight), GENIUS Act (agent transaction compliance), and GDPR (data minimization via ZK).

Is UAIIP open source?

The protocol spec is public. The SDKs are open source. The verification engine (deepidv) that powers the human verification layer is proprietary.

Book a demo to see UAIIP's human-to-agent trust chain running end-to-end.

