Researchers improve algorithms’ robustness for biometrics, image recognition applications
Artificial intelligence is not only about pattern recognition accuracy. Algorithms also need to keep working when they run into inputs they have never seen.
With that in mind, researchers at Kyushu University say they have created a new way to probe the robustness of facial recognition algorithms, which they call the raw zero-shot method.
Lead researcher Danilo Vasconcellos Vargas says too much is made of accuracy and too little attention is paid to how AI operates outside the lab.
“We must investigate ways to improve robustness and flexibility,” Vargas says. “Then we may be able to develop a true artificial intelligence.”
Described in an article in the science journal PLOS ONE, the raw zero-shot method is designed to assess how neural networks handle unknown elements. This could yield benefits in understanding how generative adversarial networks can be used to defeat biometric algorithms and other AI systems.
“There is a range of real-world applications for image recognition neural networks, including self-driving cars and diagnostic tools in health care,” says Vargas, who is with Kyushu’s Faculty of Information Science and Electrical Engineering.
“However, no matter how well-trained the AI, it can fail with even a slight change in an image,” he says. And, of course, the quality of datasets is paramount for the correct training of machine learning algorithms.
In fact, highly accurate algorithms can be broken by changes that are imperceptible to the human eye.
To understand issues connected with the malfunctioning of image recognition, the Kyushu researchers applied the raw zero-shot method to 12 artificial intelligence algorithms.
“If you give an image to an AI, it will try to tell you what it is, no matter if that answer is correct or not,” Vargas explains.
“Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they answered. They would be wrong, but wrong in the same way.”
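The setup Vargas describes can be sketched in a few lines. Everything here is an illustrative stand-in, not the paper's actual protocol: the "models" are random linear projections rather than trained networks, the "unknown images" are random vectors, and simple pairwise agreement stands in for whatever correlation measure the study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for classifiers: each maps a 64-feature "image"
# to scores over 10 classes. In the study these would be real networks.
def make_model(n_features, n_classes, seed):
    w = np.random.default_rng(seed).normal(size=(n_features, n_classes))
    return lambda x: x @ w

models = [make_model(64, 10, seed) for seed in range(3)]

# Raw-zero-shot-style probe: images from classes none of the models
# was trained on (here, simply random vectors).
unknown_images = rng.normal(size=(200, 64))

# Each model's predicted label for every unknown image.
predictions = np.array([m(unknown_images).argmax(axis=1) for m in models])

# Pairwise agreement rate. With these independent random stand-ins it
# stays near the 10% chance level for 10 classes; the hypothesis is that
# real trained networks agree far more often -- wrong in the same way.
for i in range(len(models)):
    for j in range(i + 1, len(models)):
        agreement = (predictions[i] == predictions[j]).mean()
        print(f"models {i} and {j} agree on {agreement:.0%} of images")
```

The point of the probe is that the answers are guaranteed to be wrong; only the structure of the errors carries information.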
The rationale behind the research was to understand how the AI was reacting when processing unknown images. That method could then be used to analyze why algorithms break when faced with single-pixel changes.
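The fragility to single-pixel changes can be demonstrated on a toy model. This is a sketch in the spirit of one-pixel attacks, not the researchers' method: the "classifier" is a deliberately fragile threshold on mean brightness, and the attack is a brute-force search over pixels.

```python
import numpy as np

# Toy "classifier" standing in for a fragile decision boundary: label 1
# if mean brightness exceeds 0.5, else 0. The networks in the study are
# far more complex, but can fail in an analogous way.
def classify(img):
    return int(img.mean() > 0.5)

def one_pixel_attack(img):
    """Brute-force search for a single pixel whose change flips the
    label. Returns ((row, col, new_value), attacked_image) or (None, img)."""
    original = classify(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            for value in (0.0, 1.0):
                attacked = img.copy()
                attacked[i, j] = value
                if classify(attacked) != original:
                    return (i, j, value), attacked
    return None, img

rng = np.random.default_rng(1)
image = rng.uniform(0.45, 0.55, size=(4, 4))  # image near the boundary

pixel, attacked = one_pixel_attack(image)
if pixel is not None:
    print(f"changing pixel {pixel[:2]} to {pixel[2]} flips the label "
          f"from {classify(image)} to {classify(attacked)}")
```

Because the toy image sits near the decision boundary, a single pixel is enough to flip the output; the surprising empirical finding is that deep networks trained on real data exhibit the same brittleness.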
Among the algorithms analyzed by Vargas’s team, Capsule Networks (commonly referred to as CapsNet) reportedly produced the densest clusters, giving them the best transferability of problem-solving knowledge among the neural networks tested.
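One simple way to quantify "denser clusters" is the mean distance from points to their cluster centroid. This metric and the synthetic embeddings below are assumptions for illustration; the paper's exact clustering measure may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_intra_cluster_distance(embeddings, labels):
    """Average distance from each point to its cluster centroid;
    a lower value means denser clusters."""
    dists = []
    for c in np.unique(labels):
        pts = embeddings[labels == c]
        centroid = pts.mean(axis=0)
        dists.append(np.linalg.norm(pts - centroid, axis=1).mean())
    return float(np.mean(dists))

# Hypothetical embeddings from two networks for the same three classes:
# network A clusters tightly, network B loosely, around shared centers.
labels = np.repeat([0, 1, 2], 50)
centers = rng.normal(scale=5.0, size=(3, 8))
tight = centers[labels] + rng.normal(scale=0.5, size=(150, 8))
loose = centers[labels] + rng.normal(scale=2.0, size=(150, 8))

print("network A density score:", mean_intra_cluster_distance(tight, labels))
print("network B density score:", mean_intra_cluster_distance(loose, labels))
```

Under this metric, the network with the lower score plays the role the article ascribes to CapsNet: its representations group unknown inputs more tightly.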
“While today’s AIs are accurate, they lack the robustness for further utility. We need to understand what the problem is and why it’s happening. In this work, we showed a possible strategy to study these issues,” he adds.
The research findings come weeks after Kyushu University published another biometrics-focused paper about breath recognition as a possible chemical biometric identifier.
Article Topics
adversarial attack | AI | biometrics | image recognition | machine learning