EU ban on ‘unacceptable’ AI comes into force with crucial details unresolved

The European Union introduced a ban on AI practices deemed to pose “unacceptable risks” on Sunday as part of its AI Act rollout. But although the rulebook officially restricts many controversial AI uses, crucial details about how the legislation will work in practice are still missing. All eyes are now on the European AI Office, which is scheduled to release guidelines on implementation.
Policy experts have questioned whether guidelines for AI Act compliance can be completed on time, while rights organizations are fighting national security exemptions for AI tools. Meanwhile, industry says it needs clear and sensible interpretations of the AI Act in order to stay ahead of competitors in the global AI race.
“On AI ‘high-risk’ classifications and on data-sharing, we need urgent clarity in the most affected sectors,” business group Digital Europe warned last week.
EU AI Office struggles with AI Act guidelines
From February 2nd, 2025, unacceptable AI technologies – including live facial recognition in public spaces, untargeted scraping of facial images from the internet or CCTV footage, and biometric-based categorization that infers sensitive information – are prohibited. Companies face fines of up to 35 million euros (US$36.1 million) or 7 percent of their total annual turnover for violations.
However, the AI Office is facing deep divisions in formulating the Code of Practice, which outlines practical compliance guidelines for organizations, according to an analysis by the Center for European Policy Analysis (CEPA).
The Office, tasked with enforcing the AI Act, is currently consulting on the Code with around 1,000 stakeholders, including businesses, national authorities, academic researchers and civil society. The document is due to be finalized in April – but stakeholders warn that the timeline is too short.
The Code’s third draft is expected to be released on February 17. The draft is already seeing contention over the role of external evaluators and third-party assessors for AI training and over the transparency of training data. Claims that the AI Office is “massively understaffed” made in December by German European Parliament member Axel Voss are also shaking up confidence that the guidelines will be ready on time.
The Office could still set the rules by itself if the Code is not finalized by August. But because the Code is not mandatory for companies, it may lose its legitimacy, warns Laura Caroli, senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).
“If the overall process happens ‘against’ industry instead of being co-led, owned, and shepherded by it, the code will have no legitimacy and will remain a void and purposeless exercise,” writes Caroli.
Rights organizations fight against AI Act loopholes
The AI Act has left a number of carve-outs allowing law enforcement agencies and border control to use banned AI tools, including live facial recognition in cases of serious crimes such as terrorism. The loopholes have drawn criticism from rights groups, which claim they water down protections against potential abuse. These rights groups are now turning to public opinion and local courts to build support for their position.
In Belgium, rights organizations Ligue des droits humains (LDH) and Liga voor Mensenrechten are calling for a complete ban on real-time facial recognition in the country.
The parameters for the AI Act exemptions are “very vague” and could apply to a wide range of criminal offenses. These broad definitions risk expanding the scope of facial recognition use, including for population tracking and for surveilling marginalized and criminalized groups, the organizations say, according to Brussels Times.
In France, the fight against AI surveillance tools is already gaining ground. The administrative court in Grenoble ordered the city to immediately stop using video analysis software from Briefcam after a case was brought by civil organization La Quadrature du Net.
“This ruling is an unprecedented victory in our fight against algorithmic video surveillance,” the group said in an announcement last week. “In cities such as Saint-Denis, Reims and Brest that chose to implement this type of algorithmic surveillance, people can now legitimately call for it to be stopped immediately.”
The French police have been using Briefcam’s tools to track or find people based on their appearance, clothing, gender and other characteristics. The company also provides the option for deploying facial recognition.
In December, French data privacy regulator CNIL concluded that the Ministry of the Interior did not use the software to analyze video in real time or deploy real-time facial recognition in public spaces. The regulator did, however, issue six formal notices to municipal police over regulatory breaches. The Grenoble court’s decision contradicts CNIL by finding that the software involves disproportionate processing of personal data, says La Quadrature du Net.
In January, a group of more than 20 civil rights organizations, including European Digital Rights (EDRi), AlgorithmWatch and Amnesty International, signed a letter urging the European Commission to prioritize human rights in guidelines for implementing the AI Act.