AI deepfakes are everywhere and regulators are working out how to respond

The question of digital identity is about to get much more complicated, and much more important, as the world starts reckoning with the proliferation of deepfakes. Internationally, governments are sounding the alarm about the risks that come with generative artificial intelligence (AI) that can combine tools like face biometrics, voice synthesis and language models to create realistic avatars. As reported in Fast Company, the New York Times and elsewhere, from geopolitics to pornography, the potential for disinformation, disruption and misuse is a serious issue.

Many of us have already encountered an AI persona in online ads, training videos or tutorials. But use of the rapidly advancing algorithms is spreading. In recent months, videos have appeared purporting to show people with U.S. accents supporting a coup in Burkina Faso or celebrating Chinese geopolitical savvy. These virtual people are not perfect; their faces and voices recall the uncanny valley. But they are the first wave of technology that many anticipate will transform how we think about digital identity, biometrics and reality itself.

Can governments keep up with AI?

With the introduction of OpenAI’s ChatGPT, the possibilities for widespread misuse are already exploding: deepfake news anchors spreading false information, deepfake politicians making false statements, deepfake celebrities in compromising positions. All have prompted regulations aiming to respond to the rapid proliferation of a technology that will impact every area of society.

In India, according to the Economic Times, the government of Prime Minister Narendra Modi is requiring social media companies to take “reasonable and practical measures” to remove deepfake images from their platforms.

“We have been warned of deepfakes in our agencies,” according to a statement issued by the Ministry of Electronics and Information Technology (MeitY). Chief compliance officers at firms including Facebook, Instagram, WhatsApp, Twitter and YouTube were told they now have 24 hours to address complaints about deepfakes made by individuals, and 36 hours to respond to complaints that come from government agencies or via court order.

MeitY also encouraged companies to implement their own safeguards against doctored content or other generative learning tools that may violate user agreements.

Bill to criminalize deepfake distribution

In the U.S. state of Minnesota, addressing the House Elections Committee, Democratic Rep. Zack Stephenson gave a speech that drove home just how sophisticated generative AI and deepfakes have become.

“With the increasing sophistication of these technologies, it’s becoming easier to create convincing fake news or propaganda that is designed to manipulate public opinion. This has serious implications for privacy, free speech and the integrity of our elections,” he said, according to a report in the Duluth News Tribune.

Ironically, though, Stephenson revealed that his remarks had, in fact, been generated by ChatGPT.

Stephenson has introduced a bill that would create legal consequences for deepfake pornography, making it a gross misdemeanor to “distribute without consent altered images of a person as being naked or engaging in a sexual act when the person was not actually naked or engaging in sex,” and a felony to knowingly post pornographic deepfakes for harassment.

The bill would follow similar measures in California and Texas designed to regulate the use of deepfakes.
