Real or AI? Decided at upload.
AI-generated image, video, and audio detection for user uploads, with C2PA-aligned authenticity at platform scale.
platform SDK
Upload authenticity queue
12,481 uploads today
creator-post-481
image
marketplace-demo
video
voice-note-17
audio
Detect
Manifest
Display
Live decision
AI label required
Latency
<300ms
Action
Label
AI image detection accuracy
AI video detection accuracy
AI audio detection accuracy
Decision time per upload
Three modalities. Platform-grade authenticity.
Run detection and provenance checks before user content becomes platform content.
AI-Generated Images
Detect images from Midjourney, DALL-E, Stable Diffusion, Flux, and emerging diffusion models.
AI-Generated Video
Detect video generated by Sora, Runway, Pika, and Veo at the frame and motion level.
AI-Generated Audio
Detect ElevenLabs, OpenAI TTS, and emerging voice synthesis models in audio uploads.
Edited Real Content
Detect modifications to real content: face swaps, voice clones overlaid on real video, and content splicing.
C2PA Manifest Validation
Validate C2PA Content Credentials manifests on uploaded content and surface trust signals in your UI.
Provenance Chain
Track lineage across re-uploads, re-encodings, and re-shares. Detect chain breaks that signal tampering.
Platform-Wide Reporting
Dashboard showing AI-generated content rate, top generators detected, and policy enforcement metrics.
Bulk Backfill
Scan existing content libraries for AI-generated material and tag it retroactively at scale.
Configurable Policy
Define what happens per detection outcome: label, restrict, demote, or block.
Creator Verification
Optional creator identity verification flow at content upload. Bind authenticity to a verified creator.
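A minimal sketch of how the configurable policy described above might be expressed in code. Every name here (`PolicyRule`, `decideAction`, the specific thresholds) is an illustrative assumption, not the actual SDK API.

```typescript
// Hypothetical policy configuration: maps a detection outcome and
// confidence level to a platform action. Illustrative only.
type Outcome = "ai-generated" | "edited" | "authentic" | "unverifiable";
type Action = "label" | "restrict" | "demote" | "review" | "block";

interface PolicyRule {
  outcome: Outcome;
  minConfidence: number; // rule applies at or above this confidence
  action: Action;
}

// Rules are checked top-down; the first match wins.
const policy: PolicyRule[] = [
  { outcome: "ai-generated", minConfidence: 0.95, action: "block" },
  { outcome: "ai-generated", minConfidence: 0.7, action: "label" },
  { outcome: "edited", minConfidence: 0.8, action: "review" },
  { outcome: "unverifiable", minConfidence: 0.0, action: "restrict" },
];

function decideAction(outcome: Outcome, confidence: number): Action {
  for (const rule of policy) {
    if (rule.outcome === outcome && confidence >= rule.minConfidence) {
      return rule.action;
    }
  }
  return "label"; // default treatment when no rule matches
}
```

Ordering rules from strictest to most permissive lets one table cover the full "label, restrict, demote, review, or block" range without overlap.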
Upload. Detect. Verify. Display.
Keep platform trust decisions close to the upload boundary, before synthetic content spreads.
User Upload
Content is uploaded to your platform. The SDK intercepts it and calls deepidv before the content goes live.
AI Detection
All three modalities are scored: image, video, and audio. A combined verdict is returned with per-modality confidence.
C2PA Manifest Check
If a C2PA Content Credentials manifest is present, it is validated against issuer signatures and lineage.
Trust Signal
A trust signal is returned: AI-generated, edited, authentic, or unverifiable, with confidence and rationale.
Platform Display
Your platform displays the appropriate label, badge, or content treatment based on the trust signal.
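The detection step above can be sketched as a pure combination function: per-modality scores in, a combined verdict out. This is a simplified illustration under assumed names and thresholds, not the actual deepidv implementation.

```typescript
// Illustrative per-upload combination. Field and function names are
// assumptions; a production detector would use a richer decision rule.
interface ModalityScores {
  image?: number; // 0..1 probability the modality is AI-generated
  video?: number;
  audio?: number;
}

type Signal = "ai-generated" | "authentic" | "unverifiable";

// Simple rule: take the highest per-modality score and threshold it.
function combine(scores: ModalityScores): { signal: Signal; confidence: number } {
  const values = Object.values(scores).filter((v): v is number => v !== undefined);
  if (values.length === 0) return { signal: "unverifiable", confidence: 0 };
  const max = Math.max(...values);
  if (max >= 0.7) return { signal: "ai-generated", confidence: max };
  return { signal: "authentic", confidence: 1 - max };
}
```

Keeping the combination pure means the same scores always yield the same verdict, which is what makes the decision auditable at the upload boundary.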
The output is ready for product, policy, and trust teams.
Each decision carries enough context to label content, enforce policy, and report on platform risk.
Separate image, video, and audio scores plus a combined authenticity decision.
Manifest presence, issuer validation, lineage continuity, and mismatch details.
Configurable action by confidence level: label, restrict, demote, review, or block.
Generator families, synthetic content rate, enforcement metrics, and audit exports.
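The four outputs listed above could travel together as one decision record per upload. A hypothetical shape, with every field name assumed for illustration:

```typescript
// Hypothetical decision record carrying per-modality scores, the
// combined verdict, C2PA results, and the policy action. Illustrative
// field names, not the real deepidv schema.
interface DecisionRecord {
  uploadId: string;
  scores: { image?: number; video?: number; audio?: number };
  combined: { verdict: string; confidence: number };
  c2pa: {
    manifestPresent: boolean;
    issuerValid?: boolean;
    lineageIntact?: boolean;
  };
  action: "label" | "restrict" | "demote" | "review" | "block";
  generatorFamily?: string; // populated when attribution is possible
  decidedAt: string; // ISO-8601 timestamp for audit exports
}

const example: DecisionRecord = {
  uploadId: "creator-post-481",
  scores: { image: 0.93 },
  combined: { verdict: "ai-generated", confidence: 0.93 },
  c2pa: { manifestPresent: false },
  action: "label",
  decidedAt: "2025-01-01T00:00:00Z", // placeholder timestamp
};
```

One record per decision gives product, policy, and trust teams the same artifact to label content, enforce policy, and feed reporting.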
Aligned to the standards your trust depends on.
Drops into the stack you already run. Connect the SDKs, upload pipelines, and data systems where verification has to happen, with no rip-and-replace.
Built for every platform receiving user content.
Social Network
AI image, video, and audio detection at upload with platform-wide policy enforcement
News Organization
C2PA manifest validation on contributor uploads with provenance chain display
Marketplace
AI-generated listing photo detection for INFORM Consumers Act alignment
Creator Platform
Verified creator and AI content detection bundle for premium creator tier
Dating App
AI-generated profile photo detection at signup and on every new photo upload
Real or AI? Your users deserve to know.
Drop the SDK in. Decide at upload. Display the trust signal.
Decision record
Evidence attached
Audit trail
Signal chain
Built to fit the workflow you already run.
