US AI Advisory wants more disclosure on facial recognition use by federal law enforcement

Transparency and details needed, even if sometimes ignored
The U.S. National AI Advisory Committee is preparing to recommend a requirement for the country’s federal law enforcement agencies to publish yearly summaries of their use of facial recognition and other higher-risk AI tools, FedScoop reports.

The requirement would be part of the inventory of AI applications they are already obliged to submit under a memo published by the Office of Management and Budget in March. The additional transparency would help reassure Americans by providing more helpful information about the scope and quality of AI use, Committee members say.

Miami Police Department told Committee members during a fact-finding mission earlier this year that it uses facial recognition about 40 times a year.

“If we knew an agency, for example, was using facial recognition, some observers would speculate that it’s a fundamental shift into a sort of surveillance state, where our movements will be tracked everywhere we go,” NAIAC Law Enforcement Subcommittee Chair Jane Bambauer told FedScoop. “And others said, ‘Well, no, it’s not to be used that often, only when the circumstances are consistent … with the use limitations.’”

Committee member and ROC CRO Benji Hutchinson says that producing the reports should be easy for agencies, but coordination and standardization could be more challenging. The different layers of law enforcement, the data sharing agreements between them, and MoUs already in place could complicate transparency efforts, he says.

The Law Enforcement Subcommittee’s recommendations will also include performance testing before the adoption of any AI technology, and an investment by the federal government in state-level repositories of body-camera footage that academic researchers can access, according to the report.

Brookings podcast foreshadows recent developments

In light of current developments in AI and public policy in America, the Brookings Institution’s TechTank podcast revisits a 2022 episode tackling the impact of AI on civil rights.

Lisa Rice, President and CEO of the National Fair Housing Alliance, says that “discriminatory laws that we have passed down through the centuries” set up intrinsic bias that remains a force and creates an environment of racial disparity into which AI is deployed. The history of these laws is not taught in classes and more people disbelieve that they exist than know about them, she says.

Rice also notes the National Fair Housing Alliance’s then-new framework for fairness assessments based on “purpose, process and monitoring” as a potential tool for auditors.

University of Virginia Data Activist in Residence and Criminologist Renee Cummings argues that accountability and transparency are needed to push back on the embedded use of “Blackness as a data point for danger.”

When AI is deployed in Smart City or law enforcement applications, Cummings says, communities with low trust in the technology and agencies involved are subject to surveillance that has not yet yielded the hoped-for reduction in crime rates.

Rice says that laws to protect civil rights are on the books, but each individual violation cannot be litigated, and “unfortunately, our federal regulators haven’t been able to keep up with the tech.” Hence, the need for different mechanisms to ensure the accountability, transparency, explainability and auditability Cummings refers to.

The additional reporting responsibilities for federal agencies using AI could contribute to that shift.

Host Dr. Nicole Turner Lee asks whether the EU’s special designation of high-risk AI applications could be a good example for U.S. regulation, and Cummings responds that it could be, as “regulation is something that we are yet to get right.”

EPIC fail

The Electronic Privacy Information Center briefly reviews the recent report from the Government Accountability Office on concerns with biometric technologies, highlighting that the majority of impacts reported to the GAO are negative, and stating that the best facial recognition algorithms have been proven biased.

Negative impacts were reported to GAO by more stakeholders than positive ones, but GAO takes an explicitly skeptical stance about how to understand those reports.

“However, information about the positive and negative effects is limited, as the stakeholders largely provided examples related to anecdotal or firsthand experiences or potential effects,” the report says.

EPIC also declares that the GAO found “that even the best algorithms retain racial and gender biases in controlled laboratory testing.”

The claim apparently references a much more nuanced paragraph within the GAO report, which refers to National Institute of Standards and Technology testing. “The accuracy of facial recognition, for example, has improved significantly over the last 4 years, with the best performing systems showing very little variation in false negative rates across different populations in laboratory testing,” GAO writes. “This is not true with false positive rates where performance differentials have decreased but differences remain.”

NIST said back in 2022 that differentials in false positives for the best algorithms are “undetectable.”

The latest assessment of false positive differentials in 1:N facial recognition algorithms by NIST shows that false positive (or mistaken “match”) rates for all groups are below 0.005 for dozens of algorithms.

