EU committees propose strengthened AI Act, tougher on biometric surveillance
European Union committees are weighing in on the AI Act. The EU is widely seen as the world’s regulatory trendsetter, which makes its committees covering controversial issues something of a target for activists. A year ago, the bloc announced landmark proposals for regulating artificial intelligence with the AI Act, the first legislation of its kind on this scale.
The Act is intended to govern the development and deployment of artificial intelligence, safeguarding citizens while ensuring the technology is seen as trustworthy. It categorizes uses of AI into four bands of risk, with regulation to match, including hefty fines for companies that break the rules. Biometric technologies such as facial recognition in public places are classed as high-risk, one step down from the “unacceptable” category.
Campaign groups are working hard to influence the bloc’s decision-making as the two European Parliament committees covering the act’s remit release their proposed amendments.
The lead rapporteurs of the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) have added details to the act, such as broadening the definition of biometrics to include behavior, while deleting other parts, such as the provision allowing law enforcement to use AI for real-time remote biometric identification in public.
The changes would strengthen the protection of personal data, for example where AI systems rely on large databases for training and deployment.
“AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person to offend or to reoffend, based on profiling and individual risk-assessment hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence,” states an addition. “Such AI systems should therefore be prohibited.”
Campaign groups apply pressure
Mozilla and the European Digital Rights (EDRi) campaign group are both seeking to further influence the development of the AI Act.
The Mozilla Foundation is pushing for “allocating responsibility for high-risk AI systems along the AI supply chain; making the public AI database a bedrock of transparency and effective oversight from regulators and the public at large and giving people and communities the means to take action when harmed.”
EDRi is encouraging a more grassroots approach. It notes that IMCO and LIBE will continue working on their reports until October 2022, beyond this initial draft from the committees’ rapporteurs. This is the stage at which MEPs in the committees can propose their own amendments, followed by rounds of negotiation until October.
EDRi says this is the crucial time: while the Parliament’s committees are discussing their own amendments, the next stage, the “trilogues” between Parliament, Council and Commission, “are notorious for their opacity and lack of opportunities for public scrutiny.” Only the Parliament is directly elected and therefore more likely to be in touch with the will of the people.