Everyone has an opinion about AI policies and regulations
The European Union’s plenary vote on the AI Act is June 14, but a political skirmish around remote biometric surveillance has derailed the European Parliament’s unity on exceptions in the legislation, says Euractiv.
The parliament’s four main political parties agreed in April not to table amendments. But the center-right European People’s Party (EPP) was granted an exception that allows flexibility on issues regarding remote biometric identification. That has caused friction with the other parties, which accuse the EPP of breaking the deal and pushing changes that put marginalized communities at higher risk and could compromise individual privacy.
Euractiv says it talked to an unnamed European parliamentary official who said the EPP needs to take responsibility for how it has exploited the flexibility granted to it on remote biometrics. "The agreement was no group amendments," the source reportedly said, referring to how the EPP's move has put a unified vote out of reach for the other blocs. Fearing a domino effect among other groups seeking changes to the text, the official said that now, "anything is possible."
A representative from the EPP dismisses the issue as fake news, saying the flexibility granted to the group gives it the right to table the amendments. According to Euractiv's source, however, the proposed amendments amount to a reversal of a previous compromise banning live use of remote biometric identification in public spaces. The EPP's changes expand the allowable use cases beyond judicially approved investigations of serious crimes to include locating a missing person or a suspect of a serious crime – and, most controversially, preventing terrorist attacks.
Although the EPP’s text mentions preapprovals and safeguards, it contrasts with the left-leaning parties’ preference for a total ban on remote biometric identification.
Opposition calls foul on prevention ‘myths’
One issue facing lawmakers is how to define and classify the risk and benefit of various AI applications. In response to the EPP’s amendments, Patrick Breyer, a member of parliament representing the Pirate Party, tabled a coalition-driven counter-amendment to ban any AI system billed as performing behavioral analysis.
In a post on his site, Breyer argues that “contrary to a conservative myth, there is not a single example of biometric real-time surveillance ever having prevented a terrorist attack or other events of this kind.
“The EPP’s proposed ‘exceptions’ to the ban would, in fact, justify the pervasive deployment of facial surveillance technology,” he writes. “We must not normalize a culture of mistrust and side with authoritarian regimes that use AI to suppress civil society!”
Additional amendments suggested by members of parliament seek bans on predictive policing, AI threat assessments on migrants and AI forecasts of movement across borders.
UK looks for its own agreement on regulation
Across the English Channel, United Kingdom regulators likewise continue to circle and sniff at AI regulations. There is talk of an AI summit among the UK home nations, reports Futurescot.
In a letter to Innovation Minister Richard Lochhead, Scotland’s biometrics commissioner, Brian Plastow, has argued for “coherent UK thinking” on the “acute ethical challenges” presented by AI.
"Without legislation, regulation, and effective independent oversight, businesses, consumers and the public sector may be nervous about adopting AI," Plastow wrote, warning that it could also lead to "unethical experimentation." He added: "Therefore, building public confidence and trust is essential if we are to capitalise on the many benefits that AI has to offer."
Plastow rejects a total ban like the one being discussed in the EU, arguing instead that remote biometric identification can be useful to law enforcement.
Ontario’s privacy commissioner challenges govt to step up on AI
Canada’s largest province is also joining the conversation, with the release of the privacy commissioner’s annual report. The document urges Ontario’s typically regulation-averse government to show leadership on the use of AI in the public sector.
In a release, Privacy Commissioner Patricia Kosseim urged the government to continue “pressing forward with a robust, granular and binding framework for the responsible use of artificial intelligence technologies by public sector organizations.”
“Clear and effective guardrails are needed to ensure the benefits of AI do not come at the cost of Ontarians’ privacy and other fundamental human rights,” said Kosseim. “Ontarians might want their public sector institutions to deploy AI technologies for the public good, but only if it is safe, transparent, accountable, and ethically responsible.” Echoing Scotland’s Plastow, she emphasized that “innovative uses of AI must be supported and sustained by public trust.”
The report also called on the government to make a potential digital ID system in Ontario both optional and equitably accessible, and to put strong restrictions and security measures in place to minimize privacy risk.
Venture capitalist says to calm down about AI, urges innovation
While some of the regulatory pressure has come from tech elites warning about the existential risks AI presents, Silicon Valley billionaire Marc Andreessen, who coded the early Mosaic browser and co-founded Netscape, urges calm, saying people are needlessly "freaking out."
He has even accused fellow tech gurus of trying to rig public policy on AI to their advantage. Their goal, Andreessen has said, is “a cartel of government-blessed AI vendors protected from new startup and open-source competition.”
Published on Andreessen’s Substack feed, the 7,000-word rejoinder to “AI doomers,” Why AI Will Save the World, lays out the mechanical and operational facts of AI as he sees them, arguing that it is no threat.
“AI is a computer program like any other,” writes Andreessen. “It runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.”
Calling the current climate one of "full-blown moral panic about AI," he says that public concerns are "already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws." Andreessen says this is choking innovation.
Andreessen is as strident in his position as the "doomers" he opposes. His post talks about how "AI can make everything we care about better" and "AI is quite possibly the most important – and best – thing our civilization has ever created."
Andreessen sees a future in which every child has an AI tutor that “will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”
In light of the ongoing Writers’ Guild strike in Hollywood, which is partly about the economic risk of AI to writers, Andreessen’s statement on creative work is of particular note. “The creative arts will enter a golden age,” he says, “as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.”
If, as some believe, winning public trust in AI is essential for its adoption, visions like Andreessen’s may not sway the argument – after all, recent history has many examples of cringey prophetic announcements about information technology.
However, he is likely correct when he states that “AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.”