
Practical way to have AI flag its own uncertainty reported. Could be used to spot deepfakes


Researchers say they have found an efficient way for an AI algorithm like those used in biometrics to judge how confident it is with its decisions. The technique, which reportedly does not impact a model’s performance, could also be used to spot deepfakes.

The software can quickly report not only its decision, but also its confidence in the underlying input data and in the decision itself.

Armed with this information, users can decide in real time if they need to rework their model to get better quality output, according to researchers from the Massachusetts Institute of Technology and Harvard University.

Algorithms that judge and report their own confidence can be written today, but the process is resource-intensive and comparatively slow: users needing uncertainty analysis have to run a neural network many times to get an idea of how confident the algorithm is.
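The repeated-run approach the researchers contrast their method with can be sketched as follows. This is a minimal illustration with a hypothetical toy model, not the researchers' code: the network is run many times with randomly perturbed weights (as in Monte Carlo dropout), and the spread of the predictions is read as uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_model(x, rng):
    """Toy stand-in for a neural network whose weights are jittered
    on each call, as in Monte Carlo dropout (hypothetical model)."""
    w = 2.0 + rng.normal(0.0, 0.1)  # weight perturbation per forward pass
    return w * x

def mc_uncertainty(x, n_runs=100):
    """The slow approach: run the network n_runs times and treat the
    mean as the prediction and the spread as the uncertainty."""
    preds = np.array([noisy_model(x, rng) for _ in range(n_runs)])
    return preds.mean(), preds.std()

mean, std = mc_uncertainty(3.0)  # 100 forward passes for one input
```

The cost is the point: every uncertainty estimate multiplies inference time by the number of runs, which is what makes this impractical for latency-critical applications.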

These models are operationally impractical for some of the eagerly anticipated roles for AI, such as autonomous flying or driving, which will depend on raw compute power and near-instantaneous decision making.

The research team developed what they call deep evidential regression to estimate uncertainty from a single run of a neural network.

To test their innovation, they trained a neural network to analyze a monocular color image, estimating the distance between the camera lens and each of the image's pixels. The network projected high uncertainty for pixels where it predicted the wrong depth.

In other words, it was less certain in cases in which it made wrong predictions.

The neural network spotted doctored images, too: the researchers added increasing levels of adversarial noise to some of the images submitted to the network.

While the effect was “barely perceptible to the human eye,” according to the researchers, the model reliably flagged the manipulations as highly uncertain.
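The deepfake-spotting behavior described above amounts to thresholding the model's own uncertainty score: inputs it is unusually unsure about are flagged for review. A minimal sketch with made-up uncertainty values (the threshold and scores are illustrative, not from the paper):

```python
import numpy as np

def flag_suspect_inputs(uncertainties, threshold):
    """Flag inputs whose predicted uncertainty exceeds a threshold,
    the mechanism by which high uncertainty can mark manipulated images."""
    return np.asarray(uncertainties) > threshold

# Hypothetical per-image uncertainties; the last two correspond
# to images with added adversarial noise.
scores = [0.05, 0.08, 0.07, 0.91, 0.87]
flags = flag_suspect_inputs(scores, threshold=0.5)
```

In practice the threshold would be calibrated on clean data, so that only inputs well outside the model's normal confidence range are flagged.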
