Proposed AI Act amendments would add biometric data to ‘high-risk’ criteria
A tenth set of amendments to the EU AI Act has been proposed by the co-rapporteurs of the European Parliament, Euractiv reports, focusing on the criteria for classifying AI systems as “high-risk.”
“High-risk” systems as currently defined include public facial recognition and some other biometric systems, and are subject to the most rigorous deployment rules of any allowable category.
High-risk classification applies when an AI system is covered by the safety rules of EU harmonization legislation, as toys and machinery are, or when its use case is listed under Annex III.
The amendments recommended by MEPs Brando Benifei and Dragoș Tudorache add the requirement that a system must have a specific purpose to qualify for “high-risk” classification, which would exclude general-purpose AI.
Deployment for the purposes listed in Annex III would also require a second test to meet the “high-risk” threshold: systems receiving personal or biometric information as input, or affecting people’s health, safety or fundamental rights, could be counted as high-risk.
The amendments also address the European Commission’s power to amend the specific use cases in Annex III, a power the original text grants exclusively to the EU executive.
They also add, as conditions for assessing new risks: system functionalities that go beyond the intended purpose, the potential for misuse, and the nature and volume of data being processed. Other conditions for assessing risks include the level of system autonomy, discrimination risk, the availability of mitigating measures, potential benefits and the effectiveness of human oversight.
The Commission would consult with the AI Office, and the office and national authorities could challenge the Commission’s decisions. The EU executive would then reassess the decision and publish its rationale.