AI deepfakes are everywhere and regulators are working out how to respond
The question of digital identity is about to get much more complicated, and much more important, as the world starts reckoning with the proliferation of deepfakes. Internationally, governments are sounding the alarm about the risks of generative artificial intelligence (AI) that can combine tools like face biometrics, voice synthesis and language models to create realistic avatars. As reported in Fast Company, the New York Times and elsewhere, the potential for disinformation, disruption and misuse, from geopolitics to pornography, is a serious concern.
Many of us have already encountered an AI persona in online ads, training videos or tutorials. But use of the rapidly advancing algorithms is spreading. In recent months, videos have appeared purporting to show people with U.S. accents supporting a coup in Burkina Faso or celebrating Chinese geopolitical savvy. These virtual people are not perfect; their faces and voices still sit in the uncanny valley. But they are the first wave of a technology that many anticipate will transform how we think about digital identity, biometrics and reality itself.
Can governments keep up with AI?
With the introduction of OpenAI’s ChatGPT, the possibilities for widespread misuse are already exploding. Deepfake news anchors spreading false information, deepfake politicians making false statements, deepfake celebrities in compromising positions: all have prompted regulations aiming to respond to the rapid proliferation of a technology that will affect every area of society.
In India, according to the Economic Times, Prime Minister Narendra Modi’s government is requiring social media companies to take “reasonable and practical measures” to remove deepfake images from their platforms.
“We have been warned of deepfakes in our agencies,” according to a statement issued by the Ministry of Electronics and Information Technology (MeitY). Chief compliance officers at firms including Facebook, Instagram, WhatsApp, Twitter and YouTube were told they now have 24 hours to address deepfake-related complaints made by individuals, and 36 hours to respond to complaints that come from government agencies or via court order.
MeitY also encouraged companies to implement their own safeguards against doctored content and other output from generative AI tools that may violate user agreements.
Bill to criminalize deepfake distribution
In the U.S. state of Minnesota, addressing the House Elections Committee, Democratic Rep. Zack Stephenson gave a speech that drove home just how sophisticated generative AI and deepfakes have become.
“With the increasing sophistication of these technologies, it’s becoming easier to create convincing fake news or propaganda that is designed to manipulate public opinion. This has serious implications for privacy, free speech and the integrity of our elections,” he said, according to a report in the Duluth News Tribune.
Ironically, Stephenson then revealed that his remarks had been generated by ChatGPT.
He also introduced a bill that would create legal consequences for anyone producing deepfake pornography, making it a gross misdemeanor to “distribute without consent altered images of a person as being naked or engaging in a sexual act when the person was not actually naked or engaging in sex,” and a felony to knowingly post pornographic deepfakes for harassment.
The bill would follow similar measures in California and Texas designed to regulate the use of deepfakes.