Image-modifying attacks can be foiled, making facial recognition more reliable — researchers

An AI training technique that is effective at thwarting adversarial attacks, which could have fatal consequences in autonomous vehicles, also makes it easier for algorithms to reach the correct (and, in this context, safe) answer.

Duke University researchers say they have found a way to foil adversarial attacks while minimizing decreases in algorithm performance. Their results could immunize facial recognition and autonomous navigation against attacks aimed at these increasingly popular AI capabilities.

The researchers were looking for ways to improve gradient regularization as a neural network defense while keeping the computational cost of training low. Many existing techniques for securing facial recognition and other neural networks against adversarial attacks are considered impractical because of the computational power they require, the researchers write.
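
In its basic form, gradient regularization adds a penalty on the gradient of the loss with respect to the input, so that small input perturbations change the loss as little as possible. The PyTorch sketch below illustrates that general idea only, not the Duke team's specific formulation; the function name and the penalty weight `lam` are placeholders.

```python
import torch
import torch.nn as nn

def gradient_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy loss plus a penalty on the input-gradient norm.

    Penalizing d(loss)/d(input) flattens the loss surface around each
    example, which is the core idea of gradient regularization as an
    adversarial defense. The penalty weight `lam` is illustrative.
    """
    x = x.detach().clone().requires_grad_(True)
    ce = nn.functional.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input, kept in the graph
    # (create_graph=True) so the penalty itself can be differentiated.
    (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return ce + lam * penalty
```

The double backward pass this requires is one reason gradient regularization is usually cheaper than full adversarial training but still more expensive than standard training.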

They proposed a form of complex-valued neural network capable of boosting gradient regularization used on “classification tasks of real-valued input in adversarial settings,” according to the Duke paper.

The research indicates that, given comparable storage and complexity, a gradient-regularized complex-valued neural network (CVNN) outperforms real-valued neural networks.

An article in The Register says the new method could improve the quality of computer vision algorithm results by as much as 20 percent by adding two layers of complex values, each composed of real and imaginary number components.
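
The Duke paper's exact architecture is not reproduced here, but the sketch below shows what a complex-valued layer can look like when the real and imaginary parts are carried as separate real tensors. The class names, layer sizes and the magnitude readout are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Linear layer with complex weights, stored as real and imaginary parts:
    (W_r + i*W_i)(x_r + i*x_i) = (W_r x_r - W_i x_i) + i*(W_r x_i + W_i x_r).
    Biases are omitted to keep the sketch short."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_real = nn.Linear(in_features, out_features, bias=False)
        self.w_imag = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x_real, x_imag):
        out_real = self.w_real(x_real) - self.w_imag(x_imag)
        out_imag = self.w_real(x_imag) + self.w_imag(x_real)
        return out_real, out_imag

class ComplexHead(nn.Module):
    """Two complex-valued layers over real-valued features; the magnitude
    |z| of the final complex output maps back to real-valued logits."""
    def __init__(self, in_features, hidden, num_classes):
        super().__init__()
        self.layer1 = ComplexLinear(in_features, hidden)
        self.layer2 = ComplexLinear(hidden, num_classes)

    def forward(self, x):
        x_real, x_imag = x, torch.zeros_like(x)    # lift real input to complex
        r, i = self.layer1(x_real, x_imag)
        r, i = torch.relu(r), torch.relu(i)        # ReLU applied to each part
        r, i = self.layer2(r, i)
        return torch.sqrt(r ** 2 + i ** 2 + 1e-8)  # magnitude as logits
```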

This improvement makes the performance of the networks trained with complex values and gradient regularization similar to that of networks trained on adversarial attacks, but without prior knowledge of those attacks.
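
For comparison, the adversarially trained baseline mentioned above has to generate attack examples during training, which presumes knowledge of the attack. A minimal sketch of that baseline, assuming a standard FGSM perturbation with an illustrative budget `eps`, is shown below; it is not the evaluation setup used in the Duke paper.

```python
import torch
import torch.nn as nn

def fgsm_adversarial_loss(model, x, y, eps=8 / 255):
    """One adversarial-training loss using FGSM perturbations.

    Unlike gradient regularization, this baseline must craft attack
    examples at training time, i.e. it relies on knowing the attack
    model in advance. The perturbation budget `eps` is illustrative.
    """
    x_adv = x.detach().clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    # Take a single signed-gradient step and clamp to the valid pixel range.
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()
    # Train on the perturbed examples instead of (or alongside) clean ones.
    return nn.functional.cross_entropy(model(x_adv), y)
```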

Meanwhile, work on adversarial attacks designed to defeat facial recognition systems continues.
