Money isn’t enough for the US lead in AI. Win popular support with transparency — council

Start thinking about public support for biometrics and AI the same way you think of funding for strategic research. That is the message to the federal government from an advisory group dominated by some of the most AI-attuned industry leaders in the United States.

A major draft report by the National Security Commission on Artificial Intelligence advises policy makers to abandon the way government has nurtured and protected critical technology — specifically, AI. Facial recognition is a significant focus within AI, and it is already being watched closely by the public.

It appears that no aspect of making the United States the dominant force in this arena has been missed, making the document required reading. Particularly compelling, however, is its focus on winning and keeping popular support for AI initiatives.

The nations that can master the production and use of AI will be the elite in terms of innovation, intellectual prowess and economic force projection. And those who press the algorithms into military use will be better able to tell the United States where to get off.

If U.S. political leaders want to maintain unequaled global influence, according to the commission, they must be prepared to fund development on a massive scale (as Chinese authoritarian kingpins are doing).

More collaboration between government and industry is needed, the report’s authors say.  And the nation has to recommit itself to being the place bright minds from around the world come to innovate.

Most notable, however, is the commission’s call to make all stages and forms of AI transparent. The transparency must lead to AI outcomes that respect people’s privacy and civil liberties. And where AI fails society in these ways, there must be easily accessible avenues of commensurate redress.

The body is led by former Google CEO Eric Schmidt with help from executives at, among others, Oracle, Microsoft and Amazon (namely, that company’s next CEO, Andy Jassy).

They note common public concerns about facial recognition and other biometrics potentially being used to invade their privacy, or reinforcing bias and discrimination.

If voters do not trust AI, as is increasingly the case with face biometrics, they are not going to support the significant outlays needed to implement algorithms in the service of the U.S. military, State Department, intelligence services and global trade.

At home, citizens will reject AI initiatives in domestic roles where the software, if carefully crafted and managed, could deliver noticeable improvements: policing, agriculture, aerospace, manufacturing, basic research, health care, environmental protection — an almost endless list.

Authoritarian regimes such as China and Russia simply declare popular support for AI to be overwhelming, freeing them to focus on marshaling resources.

And the fact is, the United States and other democracies historically have taken for granted that their populations supported their Big Ideas. The start of the moon program is one example. The expensive and existentially dangerous nuclear weapons buildup through the 1980s is another.

Both faded rapidly when the U.S. government could no longer inspire imaginations (or gin up support).
