Parental consent, age ranges are flaws in India’s data privacy act, say experts

Gaps in DPDPA all that’s stopping huge market from adopting Big Tech models
A common complaint from opponents of age assurance technology is that parents should be the ones to decide what their kids can and can’t see online. For years, parents could dictate which videos kids rented, which TV shows they watched and other aspects of their exposure to media. Now, however, all the shows are on 24/7, and they live in your pocket. That makes parental consent a bit more complicated.

An opinion piece in Tech Policy declares that “parental consent is a conundrum for online child safety,” and interrogates the idea in the context of India’s data privacy law, the Digital Personal Data Protection Act (DPDPA).

“The DPDP Act defines children as individuals under 18 years of age, and specifies that their data can only be processed with the consent of parents or guardians,” say the authors of the piece. “The Act also prohibits data processing that has detrimental effects on the well-being of children, and bans tracking, behavioral monitoring and the use of targeted ads on children.”

These measures, together with the Draft DPDP Rules, make India's law stricter than those in the UK or Europe, where age thresholds are lower. However, the rules have not been implemented, as the government has yet to publish their final versions.

The DPDPA and the pending draft rules do not identify any single, mandatory age verification method, but instead broadly require businesses to implement “appropriate technical and organisational measures” to obtain parental consent. It’s here that the authors focus their argument, and on the knowledge gap that can exist between generations – the “flawed assumption that parents possess the maturity, experience, and technical knowledge to make decisions in their child’s best interest.”

Sometimes parents just don’t understand

“For example, generational disconnects may leave parents unaware of online harms that occur on or outside of popular social media services, such as exposure to sexual content and grooming through popular games like Roblox and Minecraft,” they write.

As evidence, they cite a 2022 Oxfam report which found that only 38 percent of Indian households are digitally literate. “In many cases, children and young adults are the first in the family to access popular internet services, and parental consent and supervision are often absent,” they write. “It is often children who introduce their parents and elders to digital platforms. In such scenarios, how meaningful can parental or adult consent truly be?”

The piece dismisses identity verification and age assurance tools, arguing they present the risk of excessive and unintended data collection. “Particularly if parental identity is to be established using government IDs, how can principles of purpose limitation and data minimization be ensured?”

But self-declaration of age is obviously not a solution. As such, the authors propose a system rooted in parental engagement, “encouraging a balance between support, autonomy and awareness of digital harms.” The solution can’t mean shutting kids out of the internet, and it can’t mean that it’s all parents’ problem.

“Instead, policies and legal frameworks should embed safeguards like privacy by design and safety by design, not only in age verification systems, but in the design of digital services themselves. This is where regulation has a key role in enabling safer online spaces for children.”

Need for nuanced approach to age gates

Another academic opinion piece, this one published in Medianama, asks similar questions, but through a different lens: “does a blanket ban account for children’s diverse needs, awareness levels and benefits of digital engagement? Should a teenager, who understands digital trade-offs, be treated the same as a child still learning to navigate the digital world?”

The author argues that personalization and tailored content, which are often the target of regulators, can actually be good for kids. “Leveraged responsibly, personalization can enhance children’s digital experience, inclusivity, and access. A 10-year-old tween with disability can access relevant study material more easily, while a 17-year-old teenager in distress might find supportive online spaces.”

It’s a somewhat rickety argument, summarized as follows: “an outright ban on behavioural monitoring risks throwing the baby out with the bathwater. A depersonalized internet experience would resemble traditional media, lacking the tailored content. Children may encounter irrelevant or inappropriate material, potentially exposing children to harmful material.”

The notion that not regulating targeted content puts kids at risk is nonsense, and the assertion that traditional media are objectively worse for kids because they lack “tailored content” sounds like a page out of the Facebook, so to speak.

Yet the piece seems genuine in its position supporting a risk-based framework and “a balanced ecosystem – where personalisation provides benefits while ensuring safe online space.” Any parent can confirm that a 7-year-old and a 12-year-old have vastly different levels of maturity and understanding, so “an effective age-appropriate design code” that “reflects varying digital maturity levels and needs of children” makes sense.

The author proposes the following: “Younger children under eight require stringent protections, including parental oversight. Tweens (ages 9-12) should have simplified explanations of data use, with privacy settings that allow parental guidance. Teenagers (ages 13-17) should have greater control over their privacy settings, with clear disclosures enabling informed digital engagement, discretion and agency.”

“A one-size-fits-all approach can potentially limit the digital ecosystem’s ability to serve children effectively. A nuanced regulatory approach could address aforementioned challenges while retaining the benefits of personalization.”

Google, Meta waiting to see where DPDPA lands on credential providers

India is also part of the arena in which Meta and Google are facing off over where in the tech stack to put age verification measures. The Economic Times reports that a new skirmish has erupted in the wake of a blog post by Kate Charlet, Google’s global director of privacy, safety and security policy, outlining Google’s zero knowledge-based age assurance strategy “for Europe and beyond.”

Google and Meta each have more than a billion users in India, making it the largest market for both firms.

Meanwhile, the DPDPA has yet to clarify who is eligible to act as an authorized digital credential provider.

The Times quotes Saumya Brajmohan, partner at law firm Solomon & Co., who believes “this absence of clear rules around acceptable verification methods and trusted companies is what currently prevents Indian platforms from adopting models like Google’s without ambiguity.”

The India Stack infrastructure could theoretically serve as a credentialing foundation, but “concerns around data security and overreach continue to limit their adoption for this specific use case.”
