New compromise on AI Act redefines biometric recognition conditions

The Czech Presidency of the Council of the European Union suggested a new compromise on the Artificial Intelligence Act Wednesday, which reintroduces a previously removed concept to avoid excessive limitations on fingerprint biometrics, European news outlet Euractiv reports.
Expected to be the basis for a final agreement next month, the compromise represents the fourth iteration of the AI Act.
The document will reportedly be discussed at the EU Council’s Telecom Working Party October 25 and could be approved by mid-November.
The most substantial changes, compared to the third version of the text, expand exemptions allowing military, defense and national security staff to use any AI algorithms, not only those created for the private sector.
A new exemption has also been added for people using AI for non-professional purposes; such use would fall outside the scope of the AI regulation except for the transparency obligations.
Concerning biometric identification systems, the new text re-introduces the definition of “remote” AI use. It had been removed by the Slovenian Presidency last year because it was deemed confusing.
The definition returns after member states realized that fingerprints would otherwise fall under the scope of the AI Act.
The definition of remote now requires two conditions: that the system is used from a distance and that the identification occurs without the person’s active involvement.
The transparency obligations for specific AI applications like deepfakes have also been modified to “not impede the right to freedom of the arts, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme.”
Infringement of these obligations and requirements for general-purpose AI has been added to the list of violations that could spur fines of up to €20 million (roughly US$19.65 million) or 4 percent of annual turnover.
Other changes concern transparency obligations for creators of systems susceptible to causing significant harm, including disclosure of the expected output where appropriate, and constrain the leeway granted to law enforcement.
In fact, the latest text of the AI Act extends law enforcement's exemption from the four-eye principle (under which two people must verify high-risk decisions) and exempts public authorities using high-risk systems in law enforcement, migration, asylum and border control, and critical infrastructure from registering in the EU database.
Further, the AI Board, which will be formed of representatives from the member states, is now mandatory. The board will have to support market surveillance "in particular as regards the emergence of risks of systemic nature that may stem from AI systems."