DHS maps its goals and next actions for safe AI

The U.S. government has posted an AI roadmap for the Department of Homeland Security, outlining algorithm initiatives and how they will protect Americans.

DHS officials say theirs is the most detailed such document of any federal agency. That may be, but what is certain is that the document hits most of the must-dos for any organization hoping to create ongoing innovation and responsible use.

There are even a couple of surprises in the 24-page document. It bears reading because Homeland Security will be a heavy buyer and builder of AI, including biometric recognition. One example the document gives of how the department currently uses AI is the Transportation Security Administration’s use of biometrics for touchless air passenger processing, as with TSA’s CAT2 scanners.

The trick will be getting a sustained majority of Congress willing to fund and prod DHS leaders.

DHS’ roadmap closely follows the White House’s Executive Order 14110, which directs the bureaucracy to develop trustworthy, safe and secure AI. The order also demands that administrators use it responsibly for the benefit of the nation.

The roadmap is broken into nested goals and intentions, starting with so-called lines of effort: leverage AI responsibly within DHS, promote AI safety and security nationwide, and lead the AI field through cohesive partnerships.

Many, and maybe most, of the points made under those headings are just common sense – and that’s not a criticism. The steps needed to create reliable and trusted AI are still not universally known, much less implemented, at any level of government or among private enterprises and researchers.

The not-exactly-simple messages can’t be over-communicated.

Officials have pledged, for instance, to create a DHS AI sandbox and to do it with the government’s privacy, civil rights and civil liberties offices, and the White House’s general counsel. They want it completed this year.

Along the same lines, DHS leaders want their Science & Technology Directorate to create a federated AI testbed that will give the government independent assessments of systems. This one has a five-year deadline.

DHS also is calling for safe, secure and trusted AI use through “robust governance” and oversight. If the goal is to avoid people feeling subjected to AI, transparency and accountability are needed. But as with all organizations that have signed on to this idea, DHS has to go beyond good intentions.

One 2024 goal for improving governance and oversight is forthcoming guidance on AI security from the Cybersecurity and Infrastructure Security Agency. CISA will compose the guidance with help from NIST, along with best practices and guidance on red-team cybersecurity tests for AI systems.

One of the surprises in the document is the goal to eradicate child sexual abuse material. To most people, that is not a core DHS mission – until they consider the society-wide, daily economic costs of abuse that continue throughout each victim’s life.

The roadmap doesn’t state that AI will be used, but that’s an eventuality. There’s no realistic way to address this situation without it.
