
Single solution for regulating AI unlikely as laws require flexibility and context

EU leads the way with AI Act, but other regions prefer a sectoral approach

There is no more timely topic than the state of AI regulation around the globe, which is exactly what a panel of experts discussed in a recent Learning Call hosted by the Berkman Klein Center for Internet & Society at Harvard University. Covering cases from Africa, Brazil, the U.S. and the EU, the talk examined the evolution of AI regulation from broad strokes in the early days to the more specific laws and frameworks that are emerging as AI becomes more prevalent and more powerful. The conversation around AI and how to regulate it encompasses a multitude of topics, from intellectual property to data protection and the proper breadth and depth of legislation. AI is changing the world quickly, and while the EU has positioned its AI Act as a model and template for global regulation, nations may need – or want – to rely on existing laws that cover potential applications of AI.

In the words of Carlos Affonso de Souza, director of the Institute of Technology and Society of Rio de Janeiro and a professor of law at Rio de Janeiro State University and the University of Ottawa Law School, “it seems like you cannot detach politics, geopolitics, the framing of law and how technology advances in this whole discussion.”

AI Act positioned as a model for the world, but may be too broad

A large part of the conversation focused on the contrast between the EU’s landmark AI Act and the more laissez-faire approach found in the U.S. Gabriele Mazzini, the architect and lead author of the EU AI Act, calls the EU’s approach “horizontal,” in that it applies across sectors and contexts. The U.S., in comparison, is likely to adopt what Mason Kortz, a clinical instructor at Harvard Law School’s Cyberlaw Clinic at the Berkman Klein Center, calls a “sectoral approach,” which looks at use cases rooted in existing law, rather than aiming for national legislation.

In drafting the AI Act – the world’s first major piece of AI legislation – with an “omnibus approach,” Mazzini says, the EU aimed for blanket coverage that allows for few loopholes. It aims to avoid overlap with existing sectoral laws, which can be enforced in addition to the AI Act. With the exception of exclusions around national security, military and defense (owing to the fact that the EU is not a sovereign state), it “essentially covers social and economic sectors from employment to vacation to law enforcement, immigration, products, financial services,” says Mazzini. “The main idea that we put forward was the risk-based approach.”

U.S. sectoral approach mirrors debates over data protection laws

The U.S. states hate nothing more than a one-size-fits-all federal law, and Kortz says that is reflected in how the global superpower is wrangling with AI regulation. Kortz points to two things that make the U.S. position unique: “one is that to date, what implementation there has been at the federal level has been almost entirely through executive agencies in the US.” Kortz says the U.S. government has “a pretty strong administrative state and a pretty recalcitrant legislative arm. And so I think those dynamics are what led the White House to issue the executive order in October 2023, directing specific executive agencies to take on parts of this.”

Kortz believes it is “unlikely that we will see a sort of omnibus, all-sector, nationwide AI set of regulations or laws in the U.S. in the near future.” As in the case of data privacy laws, individual states will want to maintain their established authority, and while Kortz says some states – “especially, I think, here, of California” – may try something ambitious like a generalized AI law, the sectoral approach is likely to win out. “So, employment and housing and urban development will govern AI in the context of housing. You know, the idea that the subject matter expertise is more applicable than your particular technological expertise.” In this model, rules applying to law enforcement, for instance, will also apply to law enforcement using AI.

Kortz says that “at its best, the strong state government model in the U.S. allows states to be sort of laboratories of legal innovation where they can try stuff, they can be faster, they can be more responsive, and they can test out models that eventually then move to the federal level.” He notes that there is also a social advantage to a sectoral approach. “I think the idea that the emergence of AI needs to be met with an entirely new set of laws gives a lot of power to the AI developers. It’s aligned with their message that AI is completely transformative and it is not subject to governance under existing laws. And I do think it’s useful to push back on that a bit and say, like, whoa, just because we don’t have a federal AI act yet in the U.S. does not mean that you can do whatever you want, right. We still have plenty of existing laws. They apply very well.”

Existing laws bring their own complexities, which, in the case of generative AI in particular, include overlapping intellectual property laws, consumer protections, and other state-specific legislation. Kortz mentions data protection laws, which many states are still working out, as a potential accelerator or launching pad for AI regulation.

Africa, Brazil search for the right formula for AI regulation

The same could be true in Africa. Ridwan Oloyede, assistant director for the professional development workflow at Certa Foundation’s Center for Law and Innovation, says views on AI legislation for various African nations span sectoral and national strategies (he mentions Kenya, Nigeria, and Zimbabwe as nations in which politicians have been vocal on AI laws). Many are still pursuing data privacy and protection laws – which, again, could be tweaked to address AI.

The various regions’ positions on AI are reflective of their larger concerns and objectives. The EU hopes to lead in unity toward a sound and responsible legal framework. The U.S. is happier to let the states cobble together laws that suit their needs, and possibly to build a federal law from component parts that have been field-tested on a state or sectoral level. Africa stresses the importance of being part of the conversation, and not being subject to European whims or frameworks for African laws.

And in Brazil, says de Souza, another challenge has reared its head: the old chestnut about how, by trying to please everyone, you often please no one. Brazil launched its national strategy on AI during the pandemic. De Souza says “there was a lot of response from academia and civil society in the public consultation to design this national strategy,” but the results ended up leaving a good chunk of academia and civil society disappointed in how their input was reflected.

AI laws are not much good unless enforced

Ultimately, AI regulation is a moot point unless someone enforces it. “Who’s going to supervise, investigate and make sure those provisions are going to be complied with by the private sector?” asks Mazzini. “We will need a certain bureaucracy to make sure the law is deployed as we want it to be.” Rights and liability will be a major part of future conversations on whether global governance of AI is possible. Mazzini admits that the AI Act is just a start, and that there is much to do on implementing its horizontal regulations, which takes both money and knowledge.

If a consensus emerges from the various opinions, it is that some aspects of AI (e.g. potential discrimination and bias, generative AI) are risky enough that almost everyone should be able to agree they need regulating. How to go about doing so remains a puzzle to be solved.
