More talk about ethical AI but it’s still mostly just talk

In a rare conjunction, February has seen formal global, national, regional and intensely local discussions, independent of each other, about how best to use AI, including biometrics, without losing control of it.

Officials from 60 nations converged on The Hague last week to sign a non-binding “call to action” committing them to act responsibly when adding AI to their militaries.

News agency Reuters described the outcome of the world’s first international military AI conference as modest. The summit, called Responsible AI in the Military, brought together governments, businesspeople, universities and international organizations.

The apparent lack of enthusiasm, despite the United States and China both signing the resulting joint statement, might reflect the fact that Russia was not invited and Israel did not sign, reports Israeli news and culture publication Ynetnews.

It might also reflect the limited commitments made by those attending the Netherlands confab, or the fact that, to gather as many signatures as they did, organizers had to throw overboard important topics such as so-called slaughterbots, weapons capable of killing without human intervention.

Among the meatier disagreements on display at the conference was one involving the U.S. and China.

U.S. representatives feel the world should adopt a military AI framework that its own scientists have created, while China wants the United Nations to draft one instead. The U.S. military consistently tries to avoid situations in which its hands are tied, and China wants rules that do not play to the U.S.’ strengths.

More oversight needed

The state of New York, meanwhile, is rapping the knuckles of New York City over allegedly insufficient oversight of AI programs, including biometric surveillance. The state comptroller’s office says formal use guidelines are just the beginning of what the city needs.

New York City officials need to create a “clear inventory” of the tools the city uses and why, along with standards for accuracy, according to the agency. The city must also make solving child abuse cases with facial recognition more of a priority than it is now.

The comptroller’s office examined four city agencies: Children’s Services, Education, the police department and the Department of Buildings. It reportedly found “significant shortfalls in oversight and risk assessment” of AI.

Police policies for facial recognition systems, for example, are not customized to that particular kind of surveillance, according to the office. Instead, they are folded into the department’s overall surveillance guidelines, even though facial recognition is used quite differently from other surveillance tools.

And then there is the growing focus on how AI coders and their teams need to fight bias in algorithms.

A marketing article published by events firm Silicon Republic looks at the need for ethical, bias-free AI in facial recognition, computer vision and chatbots. If keen attention is not paid at the outset of a project, problems even deeper than racial bias can work their way to the surface.

For example, a recommendation algorithm might suggest self-harm content, according to the article.
