Image-modifying attacks can be foiled, making facial recognition more reliable — researchers
An AI training technique that thwarts adversarial attacks, which could have fatal consequences in autonomous vehicles, also makes it easier for algorithms to find the correct (and, in this context, safe) solution.
Duke University researchers say they have found a way to foil adversarial attacks while minimizing decreases in algorithm performance. Their results could immunize facial recognition and autonomous navigation against attacks aimed at these increasingly popular AI capabilities.
The researchers were looking for ways to improve gradient regularization in neural network defenses while minimizing the computational complexity of training. Many existing techniques for securing facial recognition and other neural networks against adversarial attacks are considered impractical because of the computational power they require, the researchers write.
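Gradient regularization, in broad terms, penalizes how sharply a network's loss changes when its input is nudged, so small adversarial perturbations have less effect on the output. The sketch below, written in PyTorch with an illustrative model, random stand-in data and a hypothetical penalty weight rather than the Duke team's code, shows the basic idea:

import torch
import torch.nn as nn

# Illustrative classifier; not the architecture used in the paper
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # hypothetical penalty weight

def train_step(x, y):
    """One training step with a gradient-norm penalty on the input."""
    x = x.clone().requires_grad_(True)            # track gradients w.r.t. the input
    loss = criterion(model(x), y)
    # Gradient of the classification loss with respect to the input pixels
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    # Penalizing this norm makes the loss less sensitive to small input
    # perturbations, which is the core idea behind gradient regularization
    penalty = grad_x.pow(2).sum(dim=1).mean()
    total = loss + lam * penalty
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Random data standing in for real images
print(train_step(torch.rand(32, 784), torch.randint(0, 10, (32,))))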
They proposed a form of complex-valued neural network capable of boosting gradient regularization used on “classification tasks of real-valued input in adversarial settings,” according to the Duke paper.
The research indicates that, given comparable storage and complexity, a gradient-regularized complex-valued neural network (CVNN) outperforms real-valued neural networks.
An article in The Register says the new method could improve the quality of computer vision algorithm results by as much as 20 percent by adding two layers of complex values composed of real and imaginary number components.
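One way to picture such complex-valued layers is to build them from paired real weight matrices acting on real and imaginary parts. The following sketch is illustrative only; the layer sizes, the split activation and the choice to lift a real image to a complex value with a zero imaginary part are assumptions, not details from the paper or The Register article:

import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """(W_r + i W_i)(x_r + i x_i) = (W_r x_r - W_i x_i) + i (W_r x_i + W_i x_r); bias omitted for brevity."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.real = nn.Linear(in_features, out_features, bias=False)
        self.imag = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x_r, x_i):
        out_r = self.real(x_r) - self.imag(x_i)
        out_i = self.real(x_i) + self.imag(x_r)
        return out_r, out_i

class SmallCVNN(nn.Module):
    """Two complex-valued layers followed by a real-valued classifier head."""
    def __init__(self, in_features=784, hidden=128, classes=10):
        super().__init__()
        self.c1 = ComplexLinear(in_features, hidden)
        self.c2 = ComplexLinear(hidden, hidden)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):
        x_r, x_i = x, torch.zeros_like(x)                # lift the real image to a complex value
        x_r, x_i = self.c1(x_r, x_i)
        x_r, x_i = torch.relu(x_r), torch.relu(x_i)      # split activation, one common choice
        x_r, x_i = self.c2(x_r, x_i)
        magnitude = torch.sqrt(x_r ** 2 + x_i ** 2 + 1e-8)  # back to real values for classification
        return self.head(magnitude)

logits = SmallCVNN()(torch.rand(4, 784))
print(logits.shape)  # torch.Size([4, 10])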
This improvement brings the performance of networks trained with complex values and gradient regularization close to that of networks trained directly on adversarial attacks, but without prior knowledge of those attacks.
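For contrast, adversarial training of the kind referred to above typically folds a simulated attack, such as FGSM, into every training step, which requires knowing the attack ahead of time. The sketch below, again with an illustrative model, random data and a hypothetical perturbation budget, shows that baseline:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
eps = 0.1  # hypothetical perturbation budget

def adversarial_train_step(x, y):
    # Craft an FGSM perturbation; this assumes prior knowledge of the attack,
    # which is exactly what the gradient-regularized approach does without
    x = x.clone().requires_grad_(True)
    loss = criterion(model(x), y)
    (grad_x,) = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad_x.sign()).detach().clamp(0, 1)
    # Train on the perturbed example instead of the clean one
    optimizer.zero_grad()
    criterion(model(x_adv), y).backward()
    optimizer.step()

adversarial_train_step(torch.rand(32, 784), torch.randint(0, 10, (32,)))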
Meanwhile, work on adversarial attacks designed to defeat facial recognition systems continues.