India is struggling with deepfakes and making tech platforms pay for it

India has a deepfake problem. On Thursday, Narayana Murthy, co-founder of IT giant Infosys, warned about deepfake videos circulating on the internet that show him recommending automated trading apps, and urged people not to fall for the scam.

Murthy is not the only prominent figure in the country whose likeness has been used in synthetic media. Over the past few months, deepfake videos of famous Indians have gone viral on social media, including one of industrialist Ratan Tata, who pointed out last week that a video of him giving investment advice on Instagram was fake.

With its 800 million internet users, India is becoming the center of a deepfake epidemic, and some commentators are now wondering whether the technology could affect its general elections, scheduled for April and May 2024. Along with Bangladesh and Pakistan, the country ranks among the ten Asia-Pacific markets most affected by identity fraud using deepfake technology, according to the 2023 Identity Fraud Report from digital identity verification firm Sumsub.

The Indian government is now striking back.

Regulators are fighting the deepfake challenge on several fronts, formulating rules to penalize both individuals who upload deepfake content and the social media platforms that host it.

The same day Infosys’ Murthy warned online users about the deepfake videos, Rajeev Chandrasekhar, minister of state for Skill Development and Entrepreneurship and Electronics and IT, announced that the government will issue an advisory on deepfakes to all tech companies. There is currently no separate regulation on deepfakes, but the government will introduce even tighter rules if companies fail to follow the advisory, the minister said, according to Mint.

“If they still do not adhere to it, go back and amend the rules and make them even tighter in case of any ambiguity,” he said on Thursday during the Global Partnership on Artificial Intelligence (GPAI) summit in New Delhi.

The move comes after Chandrasekhar met with social media platforms last week to review the progress in tackling misinformation and deepfakes. During the meeting, the minister pointed out that some platforms have still not complied with regulations and reiterated that the Indian government is taking a “zero tolerance” approach towards misinformation and deepfakes.

Platforms are currently required to tackle such harms under the IT Act and IT Rules, and to follow the Code of Criminal Procedure (CrPC), which allows deepfakes to be prosecuted as forgeries, according to CIO Axis.

Some companies are already heeding the call for self-regulation. Google is collaborating with the Indian government on fighting AI-generated fake content. The tech giant has also invested US$1 million in grants to the Indian Institute of Technology Madras to establish a center for responsible AI, the Times of India reports.

But these efforts may not be enough.

Effectively addressing deepfakes requires creating specific laws and penalties, according to Abha Shah and Nitika Nagar of the Naik Naik & Company law firm.

“It’s crucial to establish guidelines for online platforms and social media sites to detect and promptly remove deepfake content, necessitating collaboration with tech companies to enforce these regulations,” the duo writes in an article on the legal industry publication Mondaq.

India is now preparing to shape its response to AI dangers. On Tuesday, Indian Prime Minister Narendra Modi called for a global framework regulating the development of artificial intelligence tools and warned that deepfakes are being abused, even by organizations, to spread false information, according to Wion News.

“Deepfake is a challenge for the whole world… AI tools going into the hands of terrorists are also a big threat. If terrorist organizations get AI weapons, this will have a huge impact on global security… We need to plan how to tackle this,” Modi said during the GPAI summit.

“Just like we have treaties and protocols for international affairs, we need to prepare a global framework for AI at a global level,” he added.

India plans to launch its AI Programme next month, an initiative that aims to establish the country’s future domestic AI policy. Minister Chandrasekhar said that the Indian approach to regulating AI differs from those of global leaders such as the U.S. and the EU. By the time the AI safety summit in Korea starts in six months, India could have a clearer understanding of how to harmonize regulations, he added.
