Questions regarding use of biometrics in AI raised during federally sponsored forum

Concerns about privacy, including ways in which biometrics used in artificial intelligence (AI) could be exploited by law-enforcement agencies to violate civil liberties, were among a number of concerns about the use of biometrics raised at a recent technology assessment forum convened by the Comptroller General of the United States, according to the Government Accountability Office (GAO) report, Artificial Intelligence: Emerging Opportunities, Challenges, and Implications, prepared for the House Committee on Science, Space, and Technology.

The Comptroller General sponsored the forum in order “to gain a better understanding of the emerging opportunities, challenges, and implications resulting from developments in AI.”

The attendees represented industry, government, academia, and nonprofit organizations in the fields of cybersecurity, automated vehicles, criminal justice, and financial services.

“Some of the participants … raised concerns about privacy, including ways in which AI could be used by law-enforcement agencies to violate civil liberties, and said that this is an area that needs policy solutions,” GAO’s report on the forum stated.

For example, “According to one participant, law-enforcement agencies’ use of facial recognition software raises concerns that the people being captured by the software could have their civil rights violated, including the right to freely speak and assemble. Some privacy researchers and advocates have said that such remote biometric identification could have ‘chilling effects’ on human behavior and threaten free speech and freedom to assemble.”

A 2011 privacy impact assessment prepared by the International Justice and Public Safety Network said, “[t]he mere possibility of surveillance has the potential to make people feel extremely uncomfortable, cause people to alter their behavior, and lead to self-censorship and inhibition.”

Georgetown Law’s Center on Privacy & Technology October 18, 2016, study, The Perpetual Line-up: Unregulated Police Face Recognition in America, pointed out, for example, that, “By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before,” adding, “Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans.”

“This is unprecedented and highly problematic,” the study said, noting that, “Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s.” But, “We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities.”

“We can begin to see how face recognition creates opportunities for tracking—and risks—that other biometrics, like fingerprints, do not,” the study continued. “Along with names, faces are the most prominent identifiers in human society—online and offline. Our faces—not fingerprints—are on our driver’s licenses, passports, social media pages, and online dating profiles. Except for extreme weather, holidays, and religious restrictions, it is generally not considered socially acceptable to cover one’s face; often, it’s illegal. You only leave your fingerprints on the things you touch. When you walk outside, your face is captured by every smartphone and security camera pointed your way, whether or not you can see them. Face recognition isn’t just a different biometric; those differences allow for a different kind of tracking that can occur from far away, in secret, and on large numbers of people.”

Another concern over the potential implications of biometric AI developments raised by attendees was, GAO said, “exactly how the data should be used, understood, and analyzed.”

As one forum participant noted, “machine learning and credit analytics could be used to collect what is called alternative data to help improve access to credit for individuals who do not meet traditional standards of credit worthiness or who have little or no credit history.”

GAO reported that the financial sector “may benefit from the adoption of AI … where it could be used to improve decision making and, in turn, improve fairness and inclusion for consumers. One participant stated specifically that machine learning could be used to help establish a potential consumer’s identity, which is required before they can gain access to banking and credit.”

But, “Establishing identities for this purpose is especially difficult in some parts of the world, though there is now a massive change underway to collect data for the purposes of identification,” GAO said.

While GAO pointed out that AI “holds substantial promise for improving human life and economic competitiveness in a variety of ways and for helping solve some of society’s most pressing challenges … according to experts, AI [also] poses new risks and could displace workers and widen socioeconomic inequality.”

“To improve or augment human decision making … AI can be used to gather an enormous amount of data and information from multiple locations, characterize the normal operation of a system, and detect abnormalities, much faster than humans can,” GAO said in its report on the meeting, adding, “According to one forum participant, AI is an appropriate technology for the cybersecurity sector because the cyber systems used to provide security generate a vast amount of data, and AI can be used to help determine what are normal conditions and what is abnormal.”

But in assessing acceptable risks and ethical decision making, “policymakers need to decide how they are going to measure, or benchmark, the performance of AI and assess the trade-offs,” GAO informed lawmakers. “For instance, what do evaluators compare the performance of AI to?”

One “participant stressed that the ‘baseline’ is current practice, not perfection — i.e., how humans are performing now, absent AI.”

Furthermore, GAO reported, "this participant [said] we do not have a firm understanding of current practice." On the other hand, as this participant emphasized, "[i]f we have to benchmark [AI] against perfection, as they say, the perfect will be the enemy of the good and we get nowhere." According to this participant, implementing AI will involve trade-offs, which "include accuracy, speed of computation, transparency, fairness, and security."

Other participants noted that regulatory questions should be resolved by a variety of stakeholders, including economists, legal scholars, philosophers, and others involved in policy formulation and decision making, and not solely by scientists and statisticians.

In addition to policies for incentivizing data sharing, improving safety and security, updating regulatory approaches, and assessing acceptable risks and ethical decision making, GAO said “the participants … also pointed out other policy issues. They emphasized that policymakers should consider a variety of other policies that could aid the widespread adoption of AI and mitigate its potential negative consequences.”

In implementing AI, for example, “one participant said that it should be a requirement that AI developers test for disparate impact before deploying their technology. This participant noted that such a requirement would be better complied with if the developer was not held liable for the impact.”

Rather, creating “safe harbors” in conjunction with testing would allow developers an opportunity to seek out input from others to address disparate impacts, GAO noted, adding, “Another participant said that it would be desirable to find ways to not only share data, but also best practices associated with using the data, including implementing and testing AI systems.”

Still “another participant highlighted the policy issue of the ‘information haves and have nots,’ or the ‘digital divide,’” GAO said. “This participant noted that many US households, particularly those with lower incomes, do not have access to the Internet and that if people are expected to become more knowledgeable and better trained to work in jobs, such as those that are augmented by AI, then all families need Internet access. In addition, some of the participants said that policymakers will need to consider what to do about the potential displacement of workers, including training programs.”

Concerning resources for research, GAO said “one participant said that there is a gap between private- and public-sector research, and that the public sector needs to work toward closing that gap. Otherwise most of the research that is conducted on AI will be to the benefit of the private companies that invest in it.”
