EU Parliament backs delaying AI Act deadlines for biometrics, high-risk applications

The European Parliament has voted in favor of delaying certain rules of the EU AI Act related to high-risk AI applications, including those involving biometrics. Members also backed a ban on “nudifier” apps that create explicit images of a real person without their consent.
The digital omnibus package on simplification was adopted by a large majority on Thursday, paving the way for negotiations on the final law with the Council of the EU. Under the proposal, compliance deadlines for developers of high-risk AI systems would be pushed back to December 2, 2027.
The delay proposal comes after the European Commission missed its February deadline to publish guidelines on high-risk AI systems under the EU’s AI Act. The guidelines are meant to provide clarity to companies that provide and deploy these systems.
“Companies now need clarity on whether they are high risk or not,” Arba Kokalari, co-rapporteur for the Internal Market and Consumer Protection committee, said earlier this month. “If Europe wants to be competitive, we must increase investment and make it easier to use AI, not punish companies who introduce innovative AI features in safe products.”
Members of the European Parliament (MEPs) also supported allowing service providers to process special categories of personal data, such as biometrics, for bias detection – but only when “strictly necessary.” The rule would apply not only to providers and deployers of high-risk systems but also to those of other AI systems and models.
High-risk AI applications include those in the fields of biometrics, critical infrastructure, law enforcement, essential services, employment, and the administration of justice and democratic processes. Among the systems that could be classed as “high-risk” are biometric identification or categorization, as well as emotion recognition.
Sexualized deepfakes head for ban
AI-generated content would also be subject to updated rules. The proliferation of nudification apps has created an urgent need for explicit regulatory prohibition, the proposal says.
The ban proposal comes following outrage over images that feature digitally undressed women, including minors, created by X’s AI chatbot Grok. The platform, owned by Elon Musk, was placed under investigation by the European Commission in January.
The ban, however, should not prevent AI providers from developing their technical capabilities to generate images or videos, according to the document. Providers are still required to watermark AI-generated content, but the rule’s application is to be delayed until November 2026.
The digital omnibus package on simplification also suggests a longer compliance deadline for companies developing AI systems covered by sector-specific safety rules, such as toys or medical devices, setting it at August 2028.
To help stimulate EU businesses, the document recommends extending support measures to cover both small and medium-sized enterprises (SMEs) and small mid-cap enterprises (SMCs).
The seventh omnibus package on simplification was put forward by the European Commission in November last year. The EU is also currently deliberating on other proposals in the package, including those related to data and the establishment of European business wallets.
The digital simplification plan has been backed by countries such as France and Germany, as well as technology companies and lobbying groups.