1st Competition on Identity Documents Fraud Detection
Detect next-generation identity document fraud across physical manipulations, GenAI-driven digital edits, and print-and-capture attacks - on a globally diverse, under-represented set of document types.
Lost by American adults to identity fraud in 2024 - up $4B year-over-year.
Year-over-year surge in AI-driven digital document forgeries.
How often deepfake-based identity theft incidents now occur.
Existing benchmarks have saturated on purely digital artifacts and under-represent the global diversity of real identity documents, leaving production verification systems exposed to the next generation of attacks. The FREUID Challenge - the first competition built on the new FREUID dataset, contributed by the Microblink Fraud Lab - asks participants to detect identity document fraud across a uniquely realistic threat surface that combines physical manipulations, GenAI-driven digital edits, and print-and-capture attacks.
The dataset deliberately spans under-represented document types to force cross-domain generalization. Top teams will present live at IJCAI-ECAI 2026 in Bremen.
Build models that perform accurately across highly under-represented document types.
Move beyond fragile GenAI pixel noise toward semantic inconsistencies in physically printed and captured forgeries.
Foster detectors that adapt to open-ended, continuously evolving fraud strategies rather than overfitting to known attack vectors.
FREUID is a proprietary collection of high-fidelity bona-fide and fraudulent documents provided by the Microblink Fraud Lab. It is designed to surface the failure modes hiding behind the saturated metrics of existing benchmarks.
Digital forgeries are physically printed and recaptured to obfuscate the digital noise that SOTA detectors currently exploit. In addition, physical forgeries are applied on top of the printed documents and then recaptured, so that this domain of attacks is covered as well.
Forgeries created using accessible multimodal (text + image) GenAI editing tools, reflecting the real modern threat landscape.
Document types that are under-represented in existing datasets, to test cross-document generalization.
Side-by-side comparison of FREUID against the closest existing identity-document fraud benchmarks.
| Feature | PAD-ID Card 2025 | FantasyID | IDNet | FREUID (Ours) |
|---|---|---|---|---|
| Primary focus | Presentation Attack Detection (PAD) | Digital manipulation detection | Large-scale synthetic dataset for ID fraud | Physical & digital manipulations on under-represented document types |
| Bona-fide data | Printed / captured plastic cards | Printed / captured “Fantasy” (mocked) plastic cards | Synthetic | Synthetic + printed / captured plastic cards |
| Manipulations / tampering | Physical & digital manipulations | Digital manipulations | Digital manipulations | Physical & digital manipulations + print / capture |
| Diversity | 24 countries, 155 unique doc types (train); 4 LATAM countries, 20 doc types (test) | 13 unique doc types / 10 languages | 10 US and 10 EU document types | 7 document types (Asian and African) with diverse scripts (Latin, Arabic) |
We adopt standard metrics from the presentation attack detection literature.
Area under the Detection Error Trade-off (DET) curve. A single scalar capturing the trade-off between false-accept and false-reject errors across operating points.
Attack Presentation Classification Error Rate measured at a fixed 1% Bona-Fide Presentation Classification Error Rate - the production-relevant slice of the DET curve.
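The two metrics above can be sketched as follows. This is an illustrative implementation, not the official evaluation code; the score convention (higher score = more likely attack) and the function name are assumptions.

```python
import numpy as np

def det_metrics(bona_fide_scores, attack_scores, target_bpcer=0.01):
    """Illustrative sketch of AUC-DET and APCER@BPCER, assuming higher
    scores indicate a more likely attack. Not the official evaluation code."""
    bona_fide_scores = np.asarray(bona_fide_scores, dtype=float)
    attack_scores = np.asarray(attack_scores, dtype=float)
    thresholds = np.unique(np.concatenate([bona_fide_scores, attack_scores]))

    apcers, bpcers = [], []
    for t in thresholds:
        # APCER: fraction of attack presentations wrongly accepted as bona fide
        apcers.append(np.mean(attack_scores < t))
        # BPCER: fraction of bona-fide presentations wrongly rejected as attacks
        bpcers.append(np.mean(bona_fide_scores >= t))
    apcers, bpcers = np.array(apcers), np.array(bpcers)

    # Area under the DET curve: trapezoidal integration of APCER over BPCER
    order = np.argsort(bpcers)
    a, b = apcers[order], bpcers[order]
    auc_det = np.sum(np.diff(b) * (a[1:] + a[:-1]) / 2)

    # APCER at the operating point whose BPCER is closest to the 1% target
    idx = np.argmin(np.abs(bpcers - target_bpcer))
    return auc_det, apcers[idx]
```

For well-separated scores (e.g. bona-fide scores near 0, attack scores near 1), both values approach zero; a detector that ranks attacks below bona-fide samples scores poorly on both.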
Powered by a validation subset; updates with every submission so teams can iterate quickly.
Computed on a held-out test set that is never released, ensuring the leaderboard reflects true generalization.
Sponsored by Microblink. Conference registration (workshops/competitions track) is provided to top performers.
Dataset, prize pool & lead organization. The FREUID dataset is contributed by the Microblink Fraud Lab.
Have a question about eligibility, the dataset, or the evaluation protocol? Reach out and we'll respond as quickly as we can.
The challenge is open to academic and industry teams worldwide. We anticipate around 20 teams. Microblink employees, organizers and members of their immediate research groups are not eligible for prizes but may participate informally.
More information available soon.
Up to five members per team. A person may only be a member of a single team. Cross-affiliation teams are encouraged.
Yes - any publicly available pre-trained model or dataset may be used, provided the license is compatible and the artifact is fully cited in the team's report. Use of proprietary data that is not freely accessible is not permitted.
Yes. To be eligible for the competition, teams must release their training and inference code under an OSI-approved open-source license, together with a short technical report.
No. The FREUID dataset is licensed for non-commercial research use only. Reach out to freuid-challenge-2026@microblink.com to discuss commercial licensing.
Top-three teams are strongly encouraged to attend the live award showdown on August 18-21, 2026. Conference registration (workshops/competitions track) is provided by the organizers to facilitate this.
More information available soon.