Double deepfaked with OnlyFake ID
By Rob Brown, Business Development Manager at Inverid
Welcome to the future of deepfakes and their fake IDs…
Imagine you’ve bought the world’s best Generative AI fake ID detector, built with Machine Learning. It promises to spot every deepfake and every fake identity document (ID) they present. Its model is trained on a small laboratory sample of known valid and known fake IDs, and it predicts whether a presented ID fits a genuine or a fake pattern.
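To make that concrete, here’s a toy sketch in Python of what such a detector boils down to – a binary classifier fit on a small labelled sample. Everything here (the features, the numbers, the names) is illustrative, not any real product:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in feature vectors for lab-collected ID images: 200 genuine,
# 200 fake, each reduced to 64 numbers (texture, colour, noise stats).
X_genuine = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
X_fake = rng.normal(loc=0.7, scale=1.0, size=(200, 64))
X = np.vstack([X_genuine, X_fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = genuine, 1 = fake

detector = LogisticRegression(max_iter=1000).fit(X, y)

# The detector only separates patterns it has already seen. A novel
# fake that mimics the genuine distribution can score as "genuine".
novel_fake = rng.normal(loc=0.0, scale=1.0, size=(1, 64))
print("P(fake):", detector.predict_proba(novel_fake)[0, 1])
```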
Your adversary has Generative AI that keeps trying different patterns, creating images with ever more realism, movement and lighting effects, building new models of presentable IDs – until one day one sneaks past your GenAI fake ID detector. What did your detector learn at that moment? NOTHING.
Now there’s information asymmetry. You don’t know that your newest customer is not what they seem – they used an unknown fake ID. But the attacker’s system now knows what works. It learns from the win. Your detector will only learn much later, when the fraud has played out – maybe months, maybe years later. Meanwhile more deepfakes sneak through unnoticed using their born-in-the-wild unknown fake IDs.
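The attacker’s side of that asymmetry is just a loop. Here’s a toy sketch – the detector boundary and the mutation step below are hypothetical stand-ins, not anyone’s real system:

```python
import numpy as np

rng = np.random.default_rng(1)

def detector_flags_fake(sample: np.ndarray) -> bool:
    """Stand-in for the deployed model's decision boundary."""
    return sample.mean() > 0.35

def mutate(sample: np.ndarray) -> np.ndarray:
    """Attacker's generator nudging the image toward acceptance."""
    return sample - rng.uniform(0.0, 0.05, size=sample.shape)

sample = rng.uniform(0.5, 1.0, size=16)  # starts out detectably fake
attempts = 0
while detector_flags_fake(sample):
    sample = mutate(sample)
    attempts += 1

# The attacker now holds a template known to pass. The defender's
# training data is unchanged until the fraud surfaces, if it ever does.
print(f"accepted after {attempts} mutations; pattern kept by attacker")
```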
When the fraud is discovered, there’s another issue – how do you train your model to spot the “zero-day” fake ID again? You’ll have to go back to the evidence collected in the identity verification session – the user’s selfie and photo ID. How else does it learn from REAL proven fraud?
How else can it spot the others that got through using the same pattern?
Think through the implications – you will need a database that holds every customer’s enrolment video and identity document, ready to be re-inspected at any point in time and used as training material should a fraud be discovered.
This huge database would be an attacker’s dream: full of every validated customer’s ID and selfie that could be replayed to create accounts and commit fraud in that customer’s name on other services. Good luck protecting that – and pray that everyone else with a similar database has protected theirs too. It’s a password-database breach on steroids, waiting to happen.
The attackers won’t stop creating.
Some people are astonished that a synthetic image of a photo ID is more convincing when it is placed on a bedsheet or a carpet. What next? GenAI rug detectors? More realism means placing a thumb on the document – everyone knows it’s impossible to make a passport stay open on its photo page by itself. Perhaps your fake ID detector is really a thumb detector. Who knew? Just like the infamous AI that labelled photos as wolf or dog: the model “learned” that wolves generally appear in photos with snow in the background, then mislabelled huskies as wolves. AI is great at spotting patterns, which is why it excels at optical character recognition.
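Here’s a toy demonstration of that failure mode, with synthetic data standing in for photos – when a background feature perfectly predicts the label in training, the model latches onto it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500

# Column 0: the actual subject signal (only weakly informative).
# Column 1: "snow in the background" - accidentally perfect in training.
subject = rng.normal(size=n)
label = (subject + rng.normal(scale=2.0, size=n) > 0).astype(int)
snow = label.copy()  # every training wolf happened to be on snow

X = np.column_stack([subject, snow])
model = LogisticRegression().fit(X, label)
print("learned weights:", model.coef_[0])  # the snow weight dominates

# A husky photographed on snow: subject says dog, background says wolf.
husky_on_snow = np.array([[-1.0, 1.0]])
print("P(wolf):", model.predict_proba(husky_on_snow)[0, 1])
```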
Won’t videos of photo IDs save us?
Not if you believe the story of a company that lost $25m because an employee was tricked on a Zoom call. OpenAI’s Sora is a sign of things to come. I’ll freely admit that my coding skills are limited to ctrl-c, ctrl-v on the command line. But if I were a skilled hacker, able to train models, I’d be looking at how to simulate realistic-looking videos of identity documents – perfecting the art of holograms and how they change appearance under different lighting and angles of incidence.
If I were a skilled hacker, I’d find free apps that contain vast video libraries of international document designs. I would reverse engineer them to create video-realistic models for my fake IDs – with fully faked “optical protections”. The app’s EULA would not stop me. But I’m not smart enough for that, so I won’t. But someone out there will. And then what? Convince Apple to build a UV lamp into iPhones?
Here are some other questions you should be asking:
- What happens when deepfakes get so good you can’t tell the difference?
- What training data is used, and how is the model trained on REAL fraud?
- How does it integrate and correlate signals with your other fraud detection systems?
- How often are new models rolled out in production?
- Where is all the photo and video evidence stored and what happens if it is breached?
- How long is that data stored for?
- What records do you keep of what ML models were used in verification sessions?
- How will you find and flag potential deepfakes when past sessions later come under suspicion?
- What will you say to customers when you need them to re-verify?
- What evidence of authenticity can you present in a court of law to resolve a dispute?
- Are there easier ways to defend from fake IDs than fighting GenAI with AI?
Deepfakes don’t have chips in their fake IDs.
Stop accepting photos of identity documents as proof of authenticity. Is it time the law caught up with reality? Times have changed.
Don’t think that videos of “optically protected” documents will help – remotely, they carry no proof of protection. If it’s visible to the human eye and a phone camera, remember: Plaintext Presentation = Poor Protection. Start putting cryptographically secured proofs from chipped identity documents first. Demand chip-to-cloud relay and replay prevention in the verification. Use data-protection-compliant services and minimise stored data. Secure your customers and yourselves.
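For contrast, here’s a minimal sketch of the shape of those checks – ICAO 9303-style Passive Authentication plus a fresh per-session challenge. The data is simulated, the issuing state’s PKI signature check over the Security Object is omitted for brevity, and an HMAC stands in for the chip’s real asymmetric signature scheme:

```python
import hashlib
import hmac
import secrets

def dg_hash(content: bytes) -> bytes:
    return hashlib.sha256(content).digest()

# Simulated chip contents: data groups plus the Security Object's (SOD)
# hash table. On a real eMRTD the SOD is signed by the issuing state's
# PKI, which the verifier must also check - omitted here.
data_groups = {1: b"MRZ data...", 2: b"facial image bytes..."}
sod_hashes = {dg: dg_hash(c) for dg, c in data_groups.items()}

def passive_authentication(groups, sod) -> bool:
    """Every data group must hash to exactly what the signed SOD says."""
    return all(dg_hash(c) == sod[dg] for dg, c in groups.items())

print("authentic:", passive_authentication(data_groups, sod_hashes))

# Swap in a deepfake face: the hash check fails instantly - something
# no photo or video of a document can ever prove.
data_groups[2] = b"swapped-in deepfake face"
print("tampered:", passive_authentication(data_groups, sod_hashes))

# Replay prevention: a fresh per-session challenge the chip must answer
# (HMAC here stands in for the chip's real signature over the nonce).
chip_key = secrets.token_bytes(32)
challenge = secrets.token_bytes(8)  # never reused across sessions
response = hmac.new(chip_key, challenge, hashlib.sha256).digest()
expected = hmac.new(chip_key, challenge, hashlib.sha256).digest()
print("live chip session:", hmac.compare_digest(response, expected))
```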
AI won’t save you from GenAI. Cryptographic proof will. Use AI to extract data, but not to verify.
About the author
Rob Brown, Business Development Manager at Inverid, is a Brit who’s had chips with everything throughout his career – RFID tags, smart cards, mobile processors, IoT and passports. He’s also a mountain-bike coach who finds there’s always something to learn from every crash. A crashed and smashed phone in the Alps led to a world of digital pain and a realisation: recovering access is going to get a lot harder with device-bound passkeys and wallet credentials. Unless we use something smart and secure that a billion people already have: their chipped passports and ID cards. Connect with Rob on LinkedIn.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.