US and EU AI regulatory proposals under the microscope
Proposals for a framework in the U.S. and a set of laws in Europe to govern the development and use of artificial intelligence will need to carefully balance the concerns of various groups if they are to win popular support.
A blueprint for an AI bill of rights recently released by the White House, outlining five key principles on which AI-based technologies should be developed and deployed, has been described as a step in the right direction toward protecting Americans against harms from automated systems.
An article by The World Economic Forum (WEF) describes it as “a welcome initiative that must be rightly situated in the context of other forthcoming initiatives, both within the U.S. and elsewhere,” while another write-up published by Unite.ai says the move has the potential to “shift the AI landscape” and set new standards for how AI should be built, deployed and governed.
The White House early this month published a document titled ‘The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,’ with the goal of responding to the many harms and risks Americans face daily from the use of technology, data and automated systems. The document is published alongside a technical companion explaining how the blueprint should be implemented.
The WEF article, authored by representatives of the Forum and the CEO of advisory firm Cantellus Group, notes that although the blueprint has been criticized as not going far enough in scope and as potentially limiting innovation in the AI space, it nonetheless provides important protections for groups such as Black and Latino Americans, who can be negatively affected by biases in AI-powered tech.
The blueprint, introduced by the Office of Science and Technology Policy (OSTP), has also been hailed as timely, as it is not only expected to influence the future of AI technology development and deployment, but will also keep the United States on the frontlines of global AI regulatory action.
One of the issues the blueprint makes a strong case for is the protection of people from unsafe and ineffective systems.
The 73-page document is non-binding, meaning it is left to businesses and state governments to abide by its prescriptions. Among other things, it cites examples of use cases where AI has been problematic.
The recently published blueprint for an AI bill of rights is said to be similar to the Ethics Guidelines for Trustworthy AI which the European Union Commission outlined in 2019.
EU lawmakers split over ‘toughness’ of AI regulation
In the meantime, Members of the European Parliament (MEPs) are divided over whether the bloc's AI regulation should give greater room for AI innovation or make respect for fundamental human rights the top priority.
This split is happening at a time when rights advocates like European Digital Rights (EDRi) are warning that the bill under scrutiny must outline clear safeguards that will protect against mass surveillance and AI systems such as facial recognition that can harm privacy and entrench discrimination, The Brussels Times reports.
The EU, recall, has been working on an AI Act intended to regulate a wide array of AI applications in ways that align with the fundamental human rights of citizens. The bloc is building the regulation around what it calls a ‘risk-based’ approach.
Some of the lawmakers think tougher regulation of AI use in the EU tech space could stifle innovation and scare away potential investors.
Rights groups have, for their part, also opposed allowing the use of facial recognition technologies in public spaces, describing them as intrusive.
One of the points of concern, according to The Brussels Times, is that while the draft AI regulation would prohibit the use of real-time facial recognition, it would allow EU member states to deploy such systems for specific purposes such as security.
Some fear this carve-out could open the door to mass surveillance under the guise of security.
As debates around the regulation continue, experts believe there is a need to strike a balance: a regulation that leaves room for AI innovation without compromising data security or human rights.