Age-verification vendors could benefit from the new gang-suit against Meta

A group of U.S. attorneys general is suing Meta Platforms and Instagram, alleging that the companies are turning a blind eye to the goal of protecting children from the emotional harm that seems endemic to social media platforms.
Here’s a good example of what’s been filed: People of the State of California v. Meta.
Companies writing age-verification algorithms are likely to benefit most from government and parental pressure to shield children from product strategies that critics charge make social media addictive.
Meta itself is experimenting with verification in selected countries worldwide, using facial age estimation technology from UK firm Yoti.
The other two potential answers to the problem of kids whose self-esteem can be jeopardized online are radical parental involvement and Meta and its competitors whipping up new algorithms.
It’s unrealistic to expect many parents to effectively protect their children from harm on social media. That might be part of the problem, but it is reality.
And trusting Meta et al. to create an algorithm that acts like a virtue angel, whispering health and moral imperatives into children’s ears, is equally unrealistic.
Thus, the gang-suit and industry opportunity. Forty-two AGs (some of whom made a group filing while others filed suits on their own) charge Meta and Instagram with doing too little to keep children from getting online without their guardians’ consent. In fact, some argue that executives know it is happening and have built software to take advantage of it.
According to reporting by U.S. cable news outlet CNBC, the attorneys general accuse Meta, for instance, of writing code that sends children barrages of alerts that compel at least some kids to log on, and of using an infinite-scroll function that never prompts subscribers (including adults) to take a mental break.
The question is whether age verification can be made bulletproof, forming an insurmountable barrier. Will software makers get some liability relief? There were something shy of 20 million minors in the U.S. in 2018.
Time and deviltry, two factors no one can control in a child’s head, often result in problems online.
And can age verification, which typically involves biometric identification (usually facial recognition), be done in a way that will not compromise children’s identities? Or, for that matter, adults’ identities, because any solution is likely to involve a guardian looking into a camera and saying it’s OK for a child or child-appearing person to subscribe to social media.
Article Topics
age verification | biometrics | children | lawsuits | Meta | social media | United States
