Report on AI use by U.S. government cites biometrics as example of need for internal expertise


The U.S. government must improve its internal AI expertise and capacity to properly leverage the transformative power of biometrics and other technologies, according to a report from experts at Stanford and New York University. The report is meant to guide future government decisions, but along the way reveals some details about the development of a major biometric program.

U.S. Customs and Border Protection (CBP) chose face biometrics over iris recognition for border checks because of low iris capture rates and because facial images are easier to obtain. An internal report cited in “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies” shows CBP officials were unable to explain the failure rates produced by the technology, due to its proprietary nature.

The report was commissioned by the Administrative Conference of the United States, which is an independent federal agency tasked with improving government processes. It broadly examines the use of AI in various areas of government, noting that 45 percent of federal agencies studied have at least experimented with AI and machine learning tools.

The report covers facial recognition and other biometrics use in sections on CBP’s AI use cases and future, as well as challenges ahead. It notes that federal agencies’ plans for biometric screening at airports may require rulemaking, as Congress “has never clearly authorized facial recognition for U.S. citizens.”

The report places greater emphasis, however, on issues of accountability and the government’s technical capacity.

“If CBP fails to understand the flaws in its own technology, it can expose itself to unknown vulnerabilities and fail to detect adversarial attacks,” the report authors write. “More broadly, agencies that lack access to a contractor’s proprietary technology may be unable to troubleshoot and adapt their own systems.”

The report suggests that the trials at the Otay Mesa Port of Entry illustrate the knowledge gap created by relying on contractors for advanced technology, and the risks that gap carries.

“This example underscores both the potential accountability costs of procurement-generated AI tools and also the importance of developing and maintaining a baseline level of internal technical capacity even when an agency chooses to buy AI tools,” according to the report.

The report concludes that preserving accountability with AI systems is challenging, and can only be achieved through improving the government’s technical capacity. Privately produced AI tools may have the most advanced technology, but be less suited to performing government tasks in line with legal requirements and agency needs.

“CBP’s use of facial recognition reflects a bigger concern with contractor-provided AI tools,” David Freeman Engstrom, a Stanford University professor and a lead author of the report, told Bloomberg Law. “The algorithms are not just technically opaque. The companies that make them may also be able to use trade secret protections to block disclosure of their technical details in court.”

A recent budget request for fiscal 2021 seeks an increase in funding for non-defense AI and quantum AI from $1 billion in fiscal 2020 to $2 billion by fiscal 2022. Federal funding of non-defense AI research and development is expected to be reported for the first time by the Office of Science and Technology Policy in fiscal 2020, according to Bloomberg Law.
