Witnesses swat down facial recognition concerns of House committee members
If U.S. lawmakers are warming to facial recognition systems, it was not evident during yesterday's hearing of the House Committee on Homeland Security. Representatives questioned three government witnesses about the trustworthiness of the biometric technology and of the public employees who wield it.
While genuflecting to the memory of September 11, which unleashed ringing calls for any and all terrorism-defeating technology, committee members questioned the perceived invasiveness of one of the very tools demanded. Of equal concern to some on the committee were reports of false positives, and how that kind of mistake could harm individuals.
Witness testimony described a biometric face-scanning program used by U.S. Customs and Border Protection that is comparatively limited in scope and that adheres to federal civil rights and civil liberties laws and policies. At the same time, the agency is preparing to upgrade its software to an algorithm reported to be highly accurate by industry and government standards.
Almost 44 million people (U.S. citizens and otherwise) have been scanned as part of the program, John Wagner, deputy executive assistant commissioner of the agency’s field operations, told the committee.
The idea is to weed out people who are using someone else’s documentation. Agents to date have used scans to identify 252 people at a land border trying to gain entry using someone else’s travel documents, he said.
Between 97 percent and 98 percent of photo scans, Wagner said, are matched to those in the agency’s database.
No surveillance is used in any aspect of the program, and people entering the United States are made aware of a photograph being taken of their face, he said.
The photo scan is digitally compared to other pictures the person has submitted to the U.S. government for identification, including the digital photo embedded in the passport itself. Agency face scans of U.S. citizens are deleted within 12 hours.
When scans do not match previously submitted images, Customs and Border Protection agents visually compare the person to the passport photo, as they have done since photos were added to passports in 1914. If a visual match is made, the traveler is free to go. If there are doubts, the person is pulled aside for questioning.
Wagner said no one has been matched to the wrong database image.
“We’re not seeing false positives,” he stated.
Peter Mina, DHS deputy officer for programs and compliance in the Office for Civil Rights and Civil Liberties, also testified to the committee about the department's efforts to avoid civil rights violations.
Bennie Thompson (D-MS), chairman of the committee, opened the meeting by questioning the accuracy of facial recognition algorithms, and he remained concerned. It is hard to ignore reports that the software is only really accurate at recognizing white, middle-aged men.
“This is unacceptable,” Thompson said.
Witness Chuck Romine, a director with the National Institute of Standards and Technology (NIST), testified that mismatching is generally a matter of garbage in, garbage out.
“Different algorithms perform differently,” Romine said. At this point, he probably has one of the best perspectives on the matter. NIST has examined software bias, testing 189 facial-recognition algorithms supplied by 99 software developers against a test database of 18 million images of 8.5 million people drawn from federal databases.
“In the highest-performing algorithms, we saw undetectable” levels of bias. One of the best such algorithms, NEC-3 from NEC Corp., is due to roll out next month in border units nationwide.