AI rules: These are too hard; these are too soft. Is ‘just right’ possible?
Perhaps foreseeing a global reality dysfunction brought on by the indiscriminate use of AI, leaders of the United States and China are busily formulating laws and regulations designed to contain the algorithms.
The debate over whether AI will be a sentient partner to humanity or its sentient enslaver almost seems quaint right now, when large language models could soon be used to erode any common idea of reality.
China’s ruling communist party has decided that ChatGPT-like algorithms are existential threats to its continued autocratic control.
According to reporting by technology and culture publication Ars Technica, regulations being drafted in Beijing would prevent AI from being used for social mobilization – party-speak for “overthrow the government.”
Sooner or later, the government will broaden its target to include social confusion. There is little difference between Chinese citizens convinced via software and media to revolt and angry citizens who no longer know what reality is.
The government is likely to fall in either case. That may sound alarmist, but the government itself wants generative AI to “embody core socialist values,” and is specifically concerned that AI “incites splitting the country or undermines national unity.”
Three Chinese unicorns – SenseTime, Baidu and Alibaba – are well along the path to models at least as powerful as U.S.-based OpenAI’s ChatGPT despite existing rules and an embargo by Washington on chips required to go full-throttle with AI development.
In fact, there are reports that ChatGPT is circulating on mainland virtual private networks.
Ars Technica says that Cyberspace Administration of China regulations in the works now would likely slow Alibaba’s development of a sophisticated large language model.
The U.S. is acting, too, but in a way apparently intended to not alarm citizens or businesses.
News publisher Axios has reported that the Commerce Department is asking the public to comment on policy recommendations. Not a move that connotes urgency.
Indeed, a top Commerce official reportedly told Axios: “We really believe in the promise of AI.” That sounds more like a strongly worded letter than the words of someone who was likely in Washington for the disinformation-fueled Capitol insurrection.
A lot of options appear to be on the table: audits mandated through procurement standards, for instance, or prizes for finding biased algorithms.
Last year, the Biden administration floated an AI bill of rights, something like the AI Act that is being written and rewritten in the European Union.
India, looking at China, the United States and the EU, sees nothing but opportunity in generative AI.
Analytics India Magazine quotes a Ministry of Electronics and IT statement saying: “The government is not considering bringing a law or regulating the growth of artificial intelligence in the country.”
The government thinks AI will “provide personalized and interactive citizen-centric services through Digital Public Platforms,” according to the statement.
Perhaps the best that AI skeptics in India (and elsewhere) can hope for is a national and state campaign to standardize development in ways that result in responsible algorithms. That is happening now.