More talk about ethical AI but it’s still mostly just talk

In a rare conjunction, February has seen formal global, national, regional and intensely local discussions, independent of one another, about how best to use AI, including biometrics, without losing control of it.
Officials from 60 nations converged on The Hague last week to sign a non-binding "call to action" urging responsibility when adding AI to their militaries.
News agency Reuters described the outcome of the world's first international military AI conference as modest. The summit, called Responsible AI in the Military Domain (REAIM), brought together governments, businesses, universities and international organizations.
An apparent lack of enthusiasm, despite the United States and China signing the resulting joint statement, might reflect Russia not being invited and Israel declining to sign, reports Israeli news and culture publication Ynetnews.
It might also reflect the lack of commitment demanded of those attending the Netherlands confab, or the fact that, to gather as many signatures as they did, organizers had to throw overboard contentious topics like so-called slaughterbots, weapons capable of killing without human intervention.
One of the meatier disagreements on display at the conference involved the U.S. and China.
U.S. representatives feel the world should adopt a military AI framework that American scientists have created, while China wants the United Nations to draft one instead. The U.S. military consistently tries to avoid situations where its hands are tied, and China wants rules that do not play to the U.S.' strengths.
More oversight needed
The state of New York, meanwhile, is rapping the knuckles of New York City over allegedly insufficient oversight of AI programs, including biometric surveillance. The state comptroller's office says formal use guidelines are just the beginning of what the city needs.
New York City officials need to create a "clear inventory" of the tools the city uses and why, along with standards for accuracy, according to the agency. The city must also make solving child abuse cases with facial recognition more of a priority than it is now.
The comptroller's office examined four city agencies: Children's Services, Education, the police and the Department of Buildings. It reportedly found "significant shortfalls in oversight and risk assessment" of AI.
Police policies for facial recognition, for example, are not customized to that particular kind of surveillance, according to the office. Instead, they sit within the department's overall surveillance guidelines, even though facial recognition is used quite differently from other surveillance tools.
And then there is the recurring focus on how AI coders and teams need to fight bias in algorithms.
A marketing article published by events firm Silicon Republic looks at the need for ethical, bias-free AI in facial recognition, computer vision and chatbots. If keen attention is not paid at the outset of a project, problems even deeper than racial bias can work their way to the surface.
For example, a recommendation algorithm might suggest self-harm content, according to the article.