
US AI Advisory wants more disclosure on facial recognition use by federal law enforcement

Transparency and details needed, even if sometimes ignored

The U.S. National AI Advisory Committee is preparing to recommend a requirement for the country’s federal law enforcement agencies to publish yearly summaries of their use of facial recognition and other higher-risk AI tools, FedScoop reports.

The requirement would be part of the inventory of AI applications they are already obliged to submit under a memo published by the Office of Management and Budget in March. The additional transparency would help reassure Americans by providing more helpful information about the scope and quality of AI use, Committee members say.

The Miami Police Department told Committee members during a fact-finding mission earlier this year that it uses facial recognition about 40 times a year.

“If we knew an agency, for example, was using facial recognition, some observers would speculate that it’s a fundamental shift into a sort of surveillance state, where our movements will be tracked everywhere we go,” NAIAC Law Enforcement Subcommittee Chair Jane Bambauer told FedScoop. “And others said, ‘Well, no, it’s not to be used that often, only when the circumstances are consistent … with the use limitations.’”

Committee member and ROC CRO Benji Hutchinson says that producing the reports should be easy for agencies, but coordination and standardization could be more challenging. The different layers of law enforcement, the data-sharing agreements between them, and the memorandums of understanding (MOUs) already in place could complicate transparency efforts, he says.

The Law Enforcement Subcommittee’s recommendations will also include performance testing before the adoption of any AI technology, and an investment by the federal government in state-level repositories of body-camera footage that academic researchers can access, according to the report.

Brookings podcast foreshadows recent developments

In light of current developments in AI and public policy in America, the Brookings Institution’s TechTank podcast revisits a 2022 episode tackling the impact of AI on civil rights.

Lisa Rice, President and CEO of the National Fair Housing Alliance, says that “discriminatory laws that we have passed down through the centuries” established an intrinsic bias that remains a force today, creating the environment of racial disparity into which AI is deployed. The history of these laws is not taught in classes, and more people disbelieve they exist than know about them, she says.

Rice also notes the National Fair Housing Alliance’s then-new framework for fairness assessments based on “purpose, process and monitoring” as a potential tool for auditors.

University of Virginia Data Activist in Residence and Criminologist Renee Cummings argues that accountability and transparency are needed to push back on the embedded use of “Blackness as a data point for danger.”

When AI is deployed in smart city or law enforcement applications, Cummings says, communities with low trust in the technology and the agencies involved are subjected to surveillance that has not yet yielded the hoped-for reduction in crime rates.

Rice says that laws to protect civil rights are on the books, but each individual violation cannot be litigated, and “unfortunately, our federal regulators haven’t been able to keep up with the tech.” Hence, the need for different mechanisms to ensure the accountability, transparency, explainability and auditability Cummings refers to.

The additional reporting responsibilities for federal agencies using AI could contribute to that shift.

Host Dr. Nicole Turner Lee asks whether the EU’s special designation of high-risk AI applications could be a good example for U.S. regulation, and Cummings responds that it could be, as “regulation is something that we are yet to get right.”

EPIC fail

The Electronic Privacy Information Center briefly reviews the recent report from the Government Accountability Office on concerns with biometric technologies, highlighting that the majority of impacts reported to the GAO are negative, and stating that the best facial recognition algorithms have been proven biased.

Negative impacts were reported to the GAO by more stakeholders than positive ones, but the agency is explicitly skeptical about how those reports should be understood.

“However, information about the positive and negative effects is limited, as the stakeholders largely provided examples related to anecdotal or firsthand experiences or potential effects,” the report says.

EPIC also declares that the GAO found “that even the best algorithms retain racial and gender biases in controlled laboratory testing.”

The claim apparently references a much more nuanced paragraph within the GAO report, which refers to National Institute of Standards and Technology testing. “The accuracy of facial recognition, for example, has improved significantly over the last 4 years, with the best performing systems showing very little variation in false negative rates across different populations in laboratory testing,” GAO writes. “This is not true with false positive rates where performance differentials have decreased but differences remain.”

NIST said back in 2022 that differentials in false positives for the best algorithms are “undetectable.”

The latest assessment of false positive differentials in 1:N facial recognition algorithms by NIST shows that false positive (or “match”) rates for all groups are below 0.005 for dozens of algorithms.
