
NIST pushes framework to foster trusted AI

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published new guidance to aid organizations in the design, development, deployment and use of artificial intelligence (AI).

As America’s leading authority on biometrics standards and benchmarking, NIST’s guidance on trustworthy AI could have a major influence on the industry.

According to the Artificial Intelligence Risk Management Framework (AI RMF 1.0), AI technologies can benefit society, but their potential harms cannot be ignored. Risks connected with AI include biases that can affect people’s lives in several ways, from negative experiences with chatbots to rejections on job and loan applications.

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” comments Commerce Deputy Secretary Don Graves.

“It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”

More specifically, the document comprises two sections. The first discusses the aforementioned risks connected to AI and suggests a list of characteristics of "trustworthy" AI.

The second half of the report highlights four core functions to help businesses address the risks of real-world AI applications: "govern, map, measure and manage."

The "govern" function focuses on establishing a "culture of risk management" that is "cultivated and present," while the "map" function aims to recognize the context of specific AI applications and their associated risks.

The "measure" function aims to ensure that identified risks are assessed, analyzed and tracked, while the "manage" function refers to prioritizing risks and ensuring they are acted upon based on projected impact.

“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” explains NIST Director Laurie Locascio.

“It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”

The AI RMF 1.0 was mandated in a Congressional directive from January 2021. It incorporates around 400 sets of formal comments NIST received from roughly 240 organizations during the drafting stages of the framework.

NIST has also published an AI RMF Playbook, a series of guidelines to help navigate and implement the framework.

The agency plans to collaborate with the AI community to update the framework regularly. NIST also said it would establish resources for trustworthy and responsible AI.

Meanwhile, in Europe, legislators are also working toward the creation of an artificial intelligence-focused regulatory framework designed to foster the deployment of ethical AI.

Commonly known as the AI Act, the proposed legislation is being amended to include a more precise definition of remote biometric identification and to refine the criteria for classifying AI systems as high-risk.

