China and the EU regulate AI, US speculates
While the European Union plays the long game in drafting AI regulation, China has surprised many with swift yet far-reaching rules. A report compares the situation in the two jurisdictions. On a different track, the U.S. government is looking at ways to improve its AI research infrastructure.
The European Union has a long-standing reputation for regulating many facets of life. Its GDPR has been something of a global hit, and with its upcoming AI Act it attempts to safeguard human rights and society more generally. China has also brought in AI regulation, amid a scrum between regulators over who controls it.
A report for CNBC compares the two approaches and asks whether one could become the dominant model of regulation, even though the EU's project is much broader (and slower) than China's.
The report examines China's requirement that firms inform users when an algorithm is being used to push certain information to them and give them the choice to opt out, asking whether this serves the public's interest or the government's. Or perhaps it is simply a large-scale experiment the rest of the world can learn from.
In some ways the technical objectives of the EU and China are similar, and the West should pay attention to China’s moves, according to a commentator. A notable difference is China’s willingness to test novel approaches directly on the public.
Some commentators foresee a divide in approaches to AI development, and particularly to its policing. Firms may have to adapt their products to comply with local regulation, something they are already good at, a commentator tells CNBC.
First steps towards federal AI research cross-pollination for US
The U.S. National AI Initiative Act of 2020 became law in 2021 and coordinates AI research across the Federal government to accelerate progress for economic and security gains. As part of the National Artificial Intelligence Initiative, the law established the National Artificial Intelligence Research Resource (NAIRR) task force in June 2021 to create a roadmap for shared research infrastructure.
The task force has published its first assessment of the situation, based on public meetings and expert consultations. ‘Envisioning a National Artificial Intelligence Research Resource (NAIRR): Preliminary Findings and Recommendations’ describes a landscape in which access to AI development resources is largely limited to large firms and well-funded universities.
“The strategic objective for establishing a NAIRR is to strengthen and democratize the U.S. AI innovation ecosystem in a way that protects privacy, civil rights, and civil liberties,” states the report. It also calls for the NAIRR’s day-to-day operations to be independent of government, for it to set standards for responsible AI research, and for its resource elements, including testbeds, to “be accessible in user-friendly ways.”
Testbeds, cited as examples of how to drive research in specific areas, are where the NAIRR intersects with biometrics. The report points to NIST and its Face Recognition Vendor Test as an example of biometrics testing.
To protect the NAIRR, the report calls for a zero trust architecture with strong identity and access controls, in line with the broader push for all Federal agencies to adopt the approach.
However, some believe the U.S. has already fallen behind China, and groups such as AI Now are likewise calling for a democratization of AI research.