Instagram use of Yoti facial age verification spreads to India, Brazil
Instagram’s use of tests to verify age in certain cases, including biometric facial analysis provided by British firm Yoti, has spread from the U.S. to additional countries including India and Brazil. But the recently introduced “Social Vouching” approach has been removed as an age verification option “to make some improvements,” according to a statement.
Back in June, the initial Instagram pilot was introduced in the U.S. It applied – and presumably still does – to users who attempt to edit their date of birth to take them from being under the age of 18 to over 18. This triggers a requirement to undergo facial analysis via Yoti, have their age confirmed by three adult contacts on the platform, or upload identity documents.
The latest update from Instagram removes the vouching mechanism without explaining why, beyond saying it will be improved. It also does not clarify what ages trigger the verification in other countries, or which countries are covered beyond India, Brazil and the U.S. The statement says that Instagram hopes to bring the service to the UK and European Union by the end of the year.
Yoti has published a new set of FAQs on how its age estimation technology and service work as well as use cases where it is deployed.
Indian civil society members point out that it is rare in India for children to have their own devices for accessing the internet; they typically borrow their parents’ devices. The onus of consent has therefore lain with parents.
A recent study by the UK telecoms regulator Ofcom found that 32 percent of British children aged eight to 17 lie about their age to create an adult account, that 47 percent of eight-to-15-year-olds have a user age of 16 or over, and that parents help their children circumvent some restrictions.
Timeline for platform response in UK
A UK coroner has written to tech platforms including Instagram owner Meta identifying concerns in how they manage children’s use of their products and listing action points, reports the BBC.
Andrew Walker determined that schoolgirl Molly Russell, who ended her life aged 14 in 2017 after viewing suicide and self-harm content, had been suffering the “negative effects of online content.”
Walker said when giving his conclusion that the images Molly saw “shouldn’t have been available for a child to see.”
Writing to the platforms, Walker called for separate platforms for adults and children; age verification before joining a platform; provision of age-specific content; a review of the use of algorithms to provide content; a government review of the use of advertising; and parental controls, including access to, and retention of, material viewed by a child, reports the BBC.
Platform owners Meta, Pinterest, Twitter and Snapchat were given 56 days to respond with what action they would take or an explanation of why they feel they do not need to take action.
The UK is making an impact on child protection beyond its shores. Its Age-Appropriate Design Code has influenced legislation elsewhere, such as in California, and its Online Safety Bill continues its halting progress through Parliament while becoming more focused on children’s online welfare.
access management | age verification | biometrics | children | facial analysis | Instagram | Meta | regulation | research and development | social media | Yoti