EU preparing tough measures for AI, while UK and US take a slow approach
The European Parliament is preparing tough new measures over the use of artificial intelligence, including facial recognition, as it edges toward agreement on the EU’s Artificial Intelligence Act, according to a Financial Times report citing people familiar with the matter. Members of the European Parliament (MEPs) hope to finalize their position next week, with the proposals expected to be ratified by next month.
The MEPs have proposed a total ban on the use of facial recognition in public spaces under any circumstances. EU member states, however, are expected to push back against a complete ban on biometrics due to pressure from their local police forces, the report says.
Other proposals include stricter rules for copyrighted material used to train AI for chatbots and image-generating tools. The move comes after Stability AI, the creator of Stable Diffusion, was hit by copyright lawsuits from artists and visual media company Getty Images.
ChatGPT and similar products belonging to the General Purpose Artificial Intelligence (GPAI) category have brought new challenges to European efforts to formalize the AI Act. Once MEPs reach agreement, broader negotiations over the AI Act are set to begin, with the final draft of the law expected to be passed before the end of the current European Parliament term in 2024.
The EU is hoping its AI Act will play the same foundational role for global AI regulation as its General Data Protection Regulation (GDPR), which became a model for similar policies in countries such as Japan and Brazil. Other countries, however, including the U.S. and the UK, are formulating their own AI regulations.
The National Telecommunications and Information Administration (NTIA), an office housed within the U.S. Department of Commerce, announced last week that it is seeking public comment on using AI in business while mitigating harm.
Referencing guidance from the National Institute of Standards and Technology (NIST), the NTIA asked commenters about the purpose of AI accountability mechanisms such as audits and certifications, the goals of a trustworthy AI system, and the risk of systemic bias when handling sensitive human data, among other topics.
The request is part of a growing federal push to regulate AI systems in the public and private sectors.
“Our initiative will help build an ecosystem of AI audits, assessments and other mechanisms to help assure businesses and the public that AI systems can be trusted,” said NTIA Assistant Secretary of Communications Alan Davidson. “This, in turn, will feed into the broader Commerce Department and Biden administration work on AI.”
In the UK, Secretary of State for Science, Innovation and Technology Michelle Donelan introduced a new white paper in March detailing how the country can become an AI superpower while providing a framework to address risks. The pro-business approach aims to incentivize companies based overseas to establish a presence in the UK.
The country does not intend to introduce new legislation, as legislating too early would risk placing undue burdens on businesses, Donelan said in a statement. Instead, the UK aims to develop its own regulatory approach independent of the EU, including a regulatory sandbox for AI that would bring regulators and innovators together to help get new products to market.
The white paper outlined the government’s support for interoperability across different regulatory regimes.
“A heavy-handed and rigid approach can stifle innovation and slow AI adoption,” Donelan said. “That is why we set out a proportionate and pro-innovation regulatory framework.”