2024 holds more talk, more hope, more finger-pointing on AI regulation
At least in its earliest days, 2024 looks like it will be as successful as last year was at taming AI globally. That is to say, it will fail the world.
AI in general and biometric privacy in particular continue to be primarily talking points. Talking heads talk about how there's no global government to impose regulations. Virtually all national governments are too inept, political and/or duplicitous to handle their own mess. And businesses are taking advantage of the vacuum, or slow-walking self-regulation when they could be running with it.
Here are four recent, notable talking points.
Some in the AI community say the World Economic Forum should step up. That its fabulously wealthy industry leaders and celebrities should convince world leaders that runaway facial recognition, for instance, is bad.
The problem is that three of the most secretive groups of people on the planet are executives, world leaders and wealthy celebrities. (Those are, more or less, the invitation categories for the exclusive WEF Davos gathering.)
An article from Swiss news publisher SWI notes that powerful insiders are probably the only hope for steering powerful insiders. Digital ethicist Niniane Paeffgen, speaking to SWI, thinks it's worth a try.
Thinking along the same lines in a Bloomberg video interview, University of Toronto Law Professor Gillian Hadfield said that “high-level attention” was lavished on the potential dangers of AI last year. This year should be when the European Union, United States and Canada begin acting.
Hadfield feels the world needs "a system that gives our governments greater visibility" into the whole of AI. She thinks there's a "mechanism" for doing that, such as creating a registry of AI models. Then, make it illegal to use or buy an unregistered model.
The idea rests on the assumptions that all registered models would be fully disclosed and that governments can act independently of wealthy business executives.
A vendor opinion piece in The Hill adds users to the list of those responsible when they become victims of AI-related privacy crimes. The day when face and iris biometric templates are leaked cannot be far off.
Scott Allendevaux, of data-protection agency Allendevaux & Co., says people should take steps including using VPNs, even though VPNs ultimately have limited value. Encrypted messaging apps should be used by those most likely to be targeted by hackers.
What he says he’s seeing so far are “baby steps.”
Finally, and perhaps most important, is the savaging of the provisional EU AI Act's facial surveillance provisions as too weak.
"Even the publicly announced limitation of the controversial facial recognition technology to the prosecution of serious criminal offences has since fallen," writes Member of the European Parliament Patrick Breyer, a Pirate Party member.
“With this AI law,” he writes, “it appears the EU intends to compete with China not only technologically but also in terms of high-tech repression.” Even surveillance of political demonstrators is not excluded, Breyer argues.
The latest changes proposed to the Act's Article 29 would also give authorities 48 hours instead of 24 to seek judicial or administrative authorization to use forensic facial recognition.
If the changes were not exactly negotiated in utter secrecy, some apparently feel that EU leaders did not exactly wave a flag as they revised the legislation in this regard.