Chinese regulators vying to be the hub of control over AI
China is again drawing appreciative commentary about its AI governance. Not a lot is being written on the topic, but it is getting serious attention.
Global think tank the Carnegie Endowment for International Peace this week published a piece advising governments and artificial intelligence companies around the world to pay attention to what the Chinese government is doing.
It is surprising to some that China is paying attention to what other governments with freer economies and politics are doing. Last fall, it endorsed draft United Nations recommendations intended to, among other things, convince signatory countries to ban AI for social scoring and mass surveillance.
Likewise, it is counterintuitive to many that autocratic Beijing, which has always proved willing to sacrifice the niceties of civil and human rights for its own aggrandizement, would have anything to teach liberal democratic nations about protecting people from technology.
And, indeed, there is a flaw in the Carnegie commentary: it soft-pedals the strong possibility that China's communist party can influence industry development, and already has.
China is developing three methods of AI governance.
One is a rules-based approach for online algorithms that is meted out by the Cyberspace Administration of China, a regulatory agency that Carnegie says has “a focus on public opinion.”
According to the commentary, the Cyberspace Administration is comparatively young, and it focuses only on certain uses of AI, yet it is a strong and influential regulator.
The agency, for example, favors requiring AI recommendation services to “give an explanation” for decisions in cases where “users believe” algorithms are deployed “in a manner creating a major influence on their rights and interests.”
The next approach is employed by a think tank — the China Academy of Information and Communications Technology — which is nestled in another significant regulator, the Industry and Information Technology ministry.
The academy is a rough equivalent of the United States' National Institute of Standards and Technology, or NIST, which has published its own case for AI governance. (The United Kingdom has published a comparable data ethics framework.) The CAICT has published a document on facial recognition applications and protections, Carnegie notes.
While the overriding motivation of AI regulation (and all government activity) in China is to make the communist party stronger, Carnegie finds that the academy’s idea of trustworthy AI is similar to the concepts popular in the United States and the European Union.
The third and “lightest” approach to AI governance is unfolding at the Ministry of Science and Technology, according to the article.
This ministry has taken an almost American position, publishing guidelines for industry and researchers to adopt, or not.
Bureaucrats behind each strategy, according to Carnegie, want to be the dominant thought leader and enforcer on this critical topic.
Efforts by other countries to regulate AI have so far focused largely on considering restrictions on relations with biometrics providers and programs in China.
Xi Jinping, the party's general secretary, who engineered the removal of China's presidential term limits, has repeatedly said he wants China to lead the world in AI development. Dominance here means more and steadier funding for the three organizations.
Carnegie says western democracies could do worse than closely monitoring what could be a three-pronged regulatory experiment.
But clear analogies will be hard to find. Even the most statist nation in Europe bears little resemblance to China's economic and political framework.
And Xi can be expected to press his thumb on the scale behind the scenes to orchestrate a result to his liking.