Developers attempt to keep edge AI from compounding biometrics bias, accuracy issues

From chips and sensors to algorithms, biometric authentication systems have undergone a period of rapid development. This is especially evident in the video surveillance market, where speed and accuracy have improved on the back of advances in edge AI. Still, the problem of bias in AI, which can lead to inaccurate and even life-threatening decisions, has not been completely solved.
That point was raised again by well-known industry analyst Rob Enderle, who recently wrote that edge AI has a “potentially higher likelihood of bias” than even AI performed in central cloud services. The benefits of edge AI include faster response times, reduced costs due to lower bandwidth usage, and improved privacy, since biometric information such as face images does not have to be sent across networks.
One drawback of edge AI is that machine learning algorithms have less opportunity to learn, because less data is available to them on the device. Decision-making can be affected both by how much data the AI is trained on and by how much it sees once deployed.
Enderle noted that “every AI effort is at risk of bias” and that these issues can result in “huge AI accuracy issues over time.” AI systems are powered by large amounts of data, and if that data is biased, the results will be biased as well.
“The core issue…is that if that remote inference AI fails to send some critical piece of information, the centralized AI will either act, or fail to act, in error. And if you are talking about a critical problem, the AI not only will start making mistakes, it will make them at machine speeds,” Enderle wrote.
Even with good intentions, humans have built many flawed datasets and algorithms that perpetuate bias in society. This can include anything from racial or gender discrimination to financial redlining. In biometric systems, these accuracy issues can play out when an image recognition system used by a police department misidentifies a face with dark skin, causing an innocent person to be stopped and searched.
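One common way to surface this kind of disparity is to compare error rates across demographic groups during evaluation. The sketch below, using entirely hypothetical score distributions and an assumed decision threshold, shows how a false-match rate audit can reveal that one group is wrongly "matched" far more often than another.

```python
# A minimal sketch (hypothetical data) of auditing a face-matching model for
# demographic bias by comparing false-match rates across skin-tone groups.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores for impostor (non-matching) face pairs,
# grouped by a demographic attribute recorded during evaluation.
groups = {
    "lighter_skin": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "darker_skin":  rng.normal(loc=0.38, scale=0.10, size=10_000),
}

THRESHOLD = 0.60  # assumed decision threshold: scores above it count as a "match"

for name, impostor_scores in groups.items():
    # False-match rate: share of non-matching pairs wrongly accepted as matches.
    fmr = float(np.mean(impostor_scores > THRESHOLD))
    print(f"{name}: false-match rate = {fmr:.4f}")
```

A gap between the two printed rates is exactly the failure mode described above: the same threshold produces many more false matches for one group than for the other.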
The issue of edge AI accuracy in video surveillance systems is a growing challenge. Market research firm Omdia forecasts that video surveillance market revenue will grow to $31.9 billion by 2025, a compound annual growth rate (CAGR) of 7.1 percent from 2020 to 2025. By 2025, the firm expects that up to 64 percent of all digital cameras and networked video recorders will include AI capabilities.
One solution for computer vision at the edge is to use dedicated, low-power, high-performance AI processor chips capable of running deep learning algorithms on the device. New generations of chips from startups like Hailo and incumbents like Renesas and Maxim can process more data, while companies like Edge Impulse (which just secured $34 million in funding) offer developers efficient algorithms for running inference on images and other data.
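To make the on-device pattern concrete, here is a minimal sketch of running a quantized image model with the TensorFlow Lite runtime. The model file name, input shape, and output interpretation are assumptions for illustration; vendor toolchains such as Hailo's or Edge Impulse's expose their own, broadly similar APIs.

```python
# A minimal sketch of on-device inference with the TensorFlow Lite runtime.
# The model path and the meaning of its output are hypothetical.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="face_detector_int8.tflite")  # hypothetical model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Stand-in for a camera frame, shaped and typed as the quantized model expects.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()  # inference runs entirely on the device; no data leaves it
scores = interpreter.get_tensor(output_details["index"])
print("raw model output:", scores.ravel()[:5])
```

The privacy and latency benefits Enderle describes follow from this structure: the frame never crosses the network, and the decision is made at the speed of the local chip.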
Still, faster chips do not directly address the problem of training models to ‘see’ properly. Enderle suggested that a different approach to model training might be worth investigating: causal inference, which can potentially eliminate most bias at its source, before it enters machine learning models and products. He pointed out that IBM is one developer of open-source software that applies the causal inference approach to machine learning.
IBM has defined causal inference as “a set of methods attempting to estimate the effect of an intervention on an outcome from observational data.” The company released the IBM Causal Inference 360 Toolkit to give people multiple tools “to move their decision-making processes from a ‘best guess’ scenario to concrete answers based on data,” according to IBM researchers. The software is available for download on GitHub.
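As a rough illustration of what estimating an intervention's effect from observational data looks like, the sketch below applies one widely used causal-inference technique, inverse propensity weighting, to synthetic data. It shows the general kind of estimator such toolkits package; it does not reproduce the IBM toolkit's own API, and all variable names and numbers are invented.

```python
# A minimal sketch of inverse propensity weighting (IPW) on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Confounder X influences both treatment assignment A and outcome Y.
X = rng.normal(size=(n, 1))
p_treat = 1 / (1 + np.exp(-1.5 * X[:, 0]))          # treatment depends on X
A = rng.binomial(1, p_treat)
Y = 2.0 * A + 3.0 * X[:, 0] + rng.normal(size=n)    # true treatment effect is 2.0

# Naive comparison is biased because treated units have higher X on average.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# IPW: model the propensity score, then reweight units to remove confounding.
propensity = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
weights = A / propensity + (1 - A) / (1 - propensity)
ipw = np.average(Y[A == 1], weights=weights[A == 1]) - \
      np.average(Y[A == 0], weights=weights[A == 0])

print(f"naive difference: {naive:.2f}, IPW estimate: {ipw:.2f} (true effect: 2.0)")
```

The naive difference overstates the effect because the confounder is correlated with treatment; the reweighted estimate recovers something close to the true value, which is the sense in which causal methods can strip a bias out of the data before a model is trained on it.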
It is worth noting that IBM first made the software available in 2019, and the code has been updated regularly since then, which should give developers in the biometrics market some assurance that it is stable and ready for use.
Does using the causal inference approach guarantee that biometric identification systems will be perfect? Probably not, but hopefully vendors can use it to continue to advance the industry.
Article Topics
accuracy | AI chips | biometric identification | biometric-bias | biometrics | biometrics at the edge | edge AI | edge cameras | machine learning | research and development | video surveillance