5 reasons why biometric face scanning in schools should be banned — researchers



Research from the University of Michigan concludes unequivocally that biometric facial recognition should be banned from classrooms.

A study conducted by the university’s Ford School of Public Policy cites five significant negative outcomes likely to result from the use of face scanning in classrooms.

The research suggests that face scanning could “weaponize” racism, normalize surveillance and erode privacy, narrow the definition of the “acceptable student,” commodify student data and institutionalize inaccuracy.

“Because (facial recognition) is automated,” the report states, “it will extend these effects to more students than any manual system could.”

Facial recognition likely will have the same net effect on marginalized individuals as do predictive policing programs, the report’s authors write.

On paper, both concepts appear designed to deliver services without human bias. The reality is that where AI systems are deployed and how they are trained “serve to reproduce bias on a systemic level.”

The paper cites research by others indicating that the datasets used to train algorithms are overwhelmingly dominated by lighter-skinned individuals, contributing to consistently higher error rates when facial recognition systems encounter darker-skinned people.

NIST has an open-ended project that reviews face-scanning algorithms submitted by developers, and project researchers have found that quality varies from the near-comical to software that has “undetectable” levels of bias.

Face scanning in classrooms also would normalize surveillance, according to the Ford School researchers. Based on trends in the use of CCTV, administrators will grow increasingly comfortable with the technology, likely expanding deployments and system capabilities. At the same time, according to the paper, those intensively surveilled experience a growing sense of powerlessness.

Face scanning could also redefine what an acceptable student looks like. Anything considered outside the norm, including clothing, skin color, ornamentation, disabilities and gender non-conforming presentation, likely would be flagged by AI. History shows that people flagged for superficial traits such as these typically have negative experiences with authority figures.

At the same time, new data collected on students would find an eager market. There is precedent. No data is as coveted as that which is gleaned from children and young adults.

Indeed, no fewer than 121 state-level student privacy laws have been passed from 2013 to 2019, according to Student Privacy Compass. Clearly, privacy violations on the part of schools are a broad concern.

Parents, sometimes allied with educators, have for decades fought to keep student data from being sold or traded in commercial transactions. In the early 2010s, parents pushed back hard against inBloom Inc., a Gates Foundation-funded analytics company that applied big-data principles to improving school performance. Concerns that the information would be sold, though never substantiated, forced inBloom to close in 2014.

In another instance, a school district in the San Diego area allegedly offered to sell student information to a private firm in 2012, according to coverage by the San Diego Reader. New Mexico’s attorney general sued Google in February, alleging that it has illegally collected data including students’ voice recordings and search histories. Google has asked a federal judge to dismiss the case.

Last of the potential sins is the institutionalization of system inaccuracy. The authors maintain that facial recognition systems are neither as accurate nor as unbiased as they are billed to be. Yet all stakeholders are predisposed to accept the marketing.

Taxpayers might see the technology as a way of flattening tax growth. Parents, administrators and the police might see it as a way to maintain order and prevent recurring shooting rampages instead of addressing root causes. Politicians could tell voters they are doing something to secure campuses.

More generally, U.S. citizens invented the concept of technology solutionism. For the most part, they expect technology to make life better, more affordable and safer. That, combined with Americans’ short memories, means that every day, all kinds of systems are invented, deployed and allowed to underperform without much in the way of consumer disapproval.
