2 paths for AI regulation – complex and mandatory or focused and voluntary
While anxieties rise in the European Union over 11th-hour negotiations on the proposed AI Act, a motley group of government cybersecurity agencies has proposed bumpers for algorithm writers.
The voluntary guidelines reflect one of the few goals that all of the group's members share: they want the AI community to strive for secure design, development, deployment and operation.
Specifically, the document asks the community to take ownership of security on behalf of buyers, embrace radical transparency and accountability, and build organizations that treat security as a "top business priority."
Officials from the security agencies, which are located in all four hemispheres, drew on the U.S. National Institute of Standards and Technology's secure software development framework, the Cybersecurity and Infrastructure Security Agency's secure-by-design principles and the UK National Cyber Security Centre's development and deployment guidance.
Meanwhile, the EU continues to round out its AI Act, rules that, once enacted, will very much be mandatory. The Spanish presidency of the EU Council is trying to dislodge politicians who are unwilling to compromise.
Chief among the issues still holding up the process is facial recognition.
According to the EU-funded publication Euractiv, the presidency has asked all parties to accept a ban on a number of related practices, among them face image-scraping, emotion recognition in workplaces and schools, and biometric categorization by sexual orientation and religious beliefs.