AI is everywhere. Can DHS trust what it’s telling them?

How much should we trust artificial intelligence? Governments are already grappling with what is becoming one of the defining questions of the twenty-first century.

The EU’s AI Act has made news across Europe as a model for regulators worldwide. This week, the U.S. Department of Homeland Security (DHS) introduced its own new policies to ensure that the department uses AI responsibly, with specific guidelines for the oversight of facial recognition and face biometrics capture tools.

In a release, the DHS summarized the new directive and announced the appointment of the department’s first-ever chief AI officer. Eric Hysen, who is also the DHS’s chief information officer, will serve in the role.

Hysen said “the policies we are announcing today will ensure that the Department’s use of AI is free from discrimination and in full compliance with the law, ensuring that we retain the public’s trust.” The DHS lists combating fentanyl trafficking, controlling the border and fighting child exploitation among the key missions that benefit from AI tools. But the risks its new policies promise to avoid are ripe for tracking, control and exploitation of a different kind.

Please look at the camera

The two policies outlined in the DHS release are a policy statement, “Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components,” and a directive, “Use of Face Recognition and Face Capture Technologies.” Both promise good behavior in using AI and biometric data capture technology. The policy statement on AI says that “DHS will not collect, use, or disseminate data used in AI activities, or establish AI-enabled systems that make or support decisions, based on the inappropriate consideration of race, ethnicity, gender, national origin, religion, sexual orientation, gender identity, age, nationality, medical condition, or disability.”

Likewise, the directive on face recognition says that “all uses of face recognition and face capture technologies will be thoroughly tested to ensure there is no unintended bias or disparate impact in accordance with national standards.”

But these standards rely on AI being a trusted partner. The fact is, algorithms in the field do not always do what we want or expect them to. While tech mavericks waffle over whether AI’s potential is cause for fear or celebration, and both large language models and facial recognition cameras become deeply embedded in our social infrastructure, it is worth asking about the quality of information the machines are providing.

AI is your drunk friend

Intelligence agencies are aware that the AI tools they employ may not be the most reliable sources. Speaking at the 2023 Billington Cybersecurity Summit, the CIA’s chief technology officer, Nand Mulchandani, compared AI to a drunk friend whose word should be taken with a large grain of salt, Breaking Defense reports. Errors known as hallucinations, which occur when large language models and other machine learning systems draw false correlations from their data, are the technological equivalent of slurred speech; think of the grotesque extra fingers and limbs circulating in mock advertisements created with generative AI tools.

The probabilities that AI calculates, says Mulchandani, are exactly that: probable — but not certain. For some, that is not enough to establish genuine trust between human and machine.
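
Mulchandani’s distinction is worth making concrete. A minimal sketch in Python, using invented numbers with no connection to any actual DHS or CIA system, shows how a model’s raw scores become a probability distribution: the top answer is only the most likely one, never a certainty.

```python
import math

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented match scores for three candidate identities, purely illustrative.
scores = [2.1, 1.9, 0.3]
probs = softmax(scores)

best = max(range(len(probs)), key=lambda i: probs[i])
print(f"Top match: candidate {best}, p = {probs[best]:.2f}")
# Prints roughly p = 0.50: the most probable answer, not a certain one.
```

Even in this toy example, the “best” match carries only about a 50 percent probability; an analyst who treats it as a verdict is taking the drunk friend at his word.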

When AI’s an empty kettle

In a piece published on The Conversation, Mark Bailey, who works in cyber intelligence and data science at National Intelligence University, lays out the problem with trusting AI to make the right decisions.

Citing that ever-useful nugget of moral philosophy, the Trolley Problem, Bailey points to two fundamental problems with AI. The first, the AI explainability problem, is that the sheer volume of parameters AI defines and adjusts to make decisions based on statistical probability makes it more or less incomprehensible to human reason.

“AI can’t rationalize its decision-making,” writes Bailey. “You can’t look under the hood of a self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.”

The second problem, the AI alignment problem, is that AI operates in a vacuum of moral and ethical context. It feels no obligation or guilt or shame, emotions that can influence how humans make complex decisions. “Unlike humans,” says Bailey, “AI doesn’t adjust its behaviour based on how it is perceived by others or by adhering to ethical norms.” The ghost in our machine is really just a fancy calculator; it is not the Tin Man seeking a heart, but the Wizard’s baroque machinery of illusion.

What is it good for?

For the DHS, CIA and other intelligence agencies, however, AI’s limitations do not make it useless. In the DHS’s policy announcement, Secretary of Homeland Security Alejandro N. Mayorkas called AI “a powerful tool we must harness effectively and responsibly.” In the same session in which he compared it to an intoxicated person, Nand Mulchandani extolled its virtues as a way to find patterns in large amounts of data, and as a perspective-changer that can overcome conceptual blindness in experts immersed in a problem.

It “can give you something so far outside of your range,” he said, “that it really opens up the vista in terms of where you’re going to go.”
