EFF calls for independent oversight and privacy protection in facial recognition report
U.S. law enforcement agencies are implementing facial recognition with little in the way of oversight or privacy protections, which could lead to faulty systems, the implication of innocent people in crimes, and negative outcomes for people of color, according to a report released by the Electronic Frontier Foundation (EFF).
“Face Off: Law Enforcement Use of Face Recognition Technology,” written by EFF Senior Staff Attorney Jennifer Lynch, says that new technologies are being developed and adopted without realistic field testing or legal protections against misuse, creating a risk of constitutional rights violations. Furthermore, differences in system accuracy across populations, data collection practices, and documented racial bias in some police practices could lead to the systems disproportionately affecting people of color.
The body of research on bias in facial recognition systems has recently been bolstered by a study from M.I.T. researcher Joy Buolamwini, which showed misidentification rates of up to 35 percent for some populations. A House subcommittee is holding hearings to gain an understanding of the associated challenges.
“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said Lynch. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”
The four-part report examines the key issues with facial recognition and the FBI’s face recognition programs, which provide facial recognition services to many other law enforcement agencies and which the EFF says exemplify the problems with the technology. Specifically, the report alleges the FBI has failed to meet the transparency requirements mandated by federal law. It also explores potential future capabilities of and concerns with the technology, and presents recommendations for policy makers.
“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”