Photos of Australian children found in AI training dataset, creating deepfake risk

Human Rights Watch blasts open AI organization
Personal photos of Australian children are being used to train AI through a dataset that has been built by scraping images from the internet – exposing kids to the risk of private information leaks and their images being used in pornographic deepfakes.

Biometrics researchers have been struggling with how to train algorithms to recognize children, particularly as they age, for instance for investigations of child sexual abuse material, and have turned to synthetic data to avoid potential harm to real data subjects.

The images of the children were collected without the knowledge or consent of their families and used to build the Laion-5B dataset, according to findings from human rights organization Human Rights Watch (HRW). The dataset was then used to train popular generative AI tools from companies such as Stability AI and Midjourney, The Guardian reports.

HRW claims that AI tools trained on the dataset were later used to create synthetic images that could be categorized as child pornography.

The dataset was created by Laion, a German nonprofit open AI organization. The photos were collected from personal blogs, video and photo-sharing sites, school websites and photographers’ collections of family portraits. Some were uploaded decades before the Laion-5B dataset was created, while many of them were not publicly available.

Human Rights Watch has so far found 190 photos of children from Australia, but this is likely only the tip of the iceberg: the dataset contains 5.85 billion images and captions, and the organization has reviewed less than 0.0001 percent of them. Some photos were listed with the children’s names and other information, making their identities traceable.

Laion has confirmed that the dataset contained the children’s photos found by Human Rights Watch and pledged to remove them. The nonprofit also said that children and their guardians were responsible for removing children’s personal photos from the internet.

“LAION datasets are just a collection of links to images available on public internet. Removing links from LAION datasets DOES NOT result in removal of actual original images hosted by the responsible third parties on public internet,” the organization told The Guardian.

HRW’s children’s rights and technology researcher Hye Jung Han called on the Australian government to urgently adopt laws to protect children’s data from “AI-fueled misuse.” Australia is currently preparing to amend its Privacy Act, including drafting the Children’s Online Privacy Code.

“Generative AI is still a nascent technology, and the associated harm that children are already experiencing is not inevitable,” says Han.
