Practical way for AI to flag its own uncertainty reported; could be used to spot deepfakes
Researchers say they have found an efficient way for an AI algorithm like those used in biometrics to judge how confident it is with its decisions. The technique, which reportedly does not impact a model’s performance, could also be used to spot deepfakes.
The software can quickly report not only its decision but also its confidence in the underlying input data and in the decision itself.
Armed with this information, users can decide in real time if they need to rework their model to get better quality output, according to researchers from the Massachusetts Institute of Technology and Harvard University.
Algorithms can be written today that judge and report on their confidence, but the process is resource intensive and comparatively slow. Users needing uncertainty analysis have to run a neural network repeatedly to get an idea of how confident the algorithm is.
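The slow path the researchers contrast with can be sketched as follows. This is a minimal, hypothetical illustration of the repeated-runs idea (here, Monte Carlo dropout on a toy two-layer network), not the teams' code; all weights and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": fixed weights, with hidden units randomly
# dropped on every forward pass (Monte Carlo dropout).
W1 = rng.normal(size=(8, 4))   # hidden-layer weights (hypothetical)
W2 = rng.normal(size=(1, 8))   # output-layer weights (hypothetical)

def forward(x, drop=0.5):
    h = np.maximum(W1 @ x, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) >= drop     # fresh dropout mask each call
    out = W2 @ (h * mask) / (1.0 - drop)   # rescale to keep expectation
    return out.item()

def predict_with_uncertainty(x, runs=200):
    # The resource-intensive part: one full forward pass per sample.
    samples = np.array([forward(x) for _ in range(runs)])
    return samples.mean(), samples.std()   # prediction, spread = uncertainty

mean, std = predict_with_uncertainty(np.ones(4))
```

The spread of the 200 predictions serves as the uncertainty estimate, which is why this approach multiplies compute cost by the number of runs.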
These models are operationally impractical for some of the eagerly anticipated roles for AI, such as autonomous flying or driving, which will depend on raw compute power and near-instantaneous decision making.
The research team developed what they call deep evidential regression to estimate uncertainty from a single run of a neural network.
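In broad strokes, and simplifying the published method, the network's final layer outputs the parameters of a Normal-Inverse-Gamma distribution, from which prediction and uncertainty follow in closed form with no repeated sampling. The sketch below is a hedged illustration of that single-pass idea; the function names and the raw output vector are hypothetical, not the team's implementation.

```python
import numpy as np

def softplus(x):
    # Smooth map to positive values, used to constrain parameters.
    return np.log1p(np.exp(x))

def evidential_head(raw):
    """Map four raw network outputs to Normal-Inverse-Gamma parameters
    (gamma, nu, alpha, beta). Constraints: nu > 0, alpha > 1, beta > 0."""
    gamma = raw[0]                    # predicted mean, unconstrained
    nu    = softplus(raw[1])
    alpha = softplus(raw[2]) + 1.0
    beta  = softplus(raw[3])
    return gamma, nu, alpha, beta

def uncertainties(gamma, nu, alpha, beta):
    # Closed-form moments of the distribution: one pass, no sampling.
    prediction = gamma
    aleatoric  = beta / (alpha - 1.0)          # expected data noise
    epistemic  = beta / (nu * (alpha - 1.0))   # model (knowledge) uncertainty
    return prediction, aleatoric, epistemic

# A hypothetical raw output vector from a single forward pass:
pred, alea, epis = uncertainties(*evidential_head(np.array([2.0, 0.5, 1.0, 0.3])))
```

Because both uncertainty terms are simple functions of the four outputs, the estimate costs essentially nothing beyond the forward pass itself, which is what makes the approach attractive for latency-critical uses like autonomous driving.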
To test their innovation, they trained a neural network to analyze a monocular color image and estimate the distance between the camera lens and each of the image's pixels. The network assigned high uncertainty to the pixels where it predicted the wrong depth; in other words, it was least certain precisely where its predictions were wrong.
The neural network spotted doctored images, too. The researchers added increasing levels of adversarial noise to some of the images submitted to the network.
While the effect was “barely perceptible to the human eye,” according to the researchers, the model reliably flagged the manipulations as highly uncertain.