
Privacy advocates push back against Meta’s data usage for AI development

Meta has notified millions of European users of upcoming changes to its privacy policy, set to take effect on June 26, 2024. According to Silicon, the company plans to use personal data, including years of posts, images and tracking information, to develop unspecified AI technologies and to share data with undefined third parties. The change has drawn backlash from privacy advocates and regulatory bodies.

Max Schrems, a privacy advocate, criticizes Meta’s approach, stating that the company aims to use any data from any source for any purpose under the guise of “AI technology.” Schrems highlights that this practice is fundamentally at odds with the General Data Protection Regulation (GDPR) requirements.

In response to Meta’s policy update, privacy organization noyb has filed complaints in 11 European countries, urging authorities to halt the implementation immediately. They argue that Meta’s plan lacks transparency and fails to provide users with adequate control over their personal data. Noyb’s complaints highlight numerous violations of GDPR, including issues with transparency, data protection principles, and the right to be forgotten.

Schrems expresses concern over Meta’s “broad and undefined use of AI technology,” noting that the company has not provided any specific details on how the data will be used. He warns that this could lead to severe privacy infringements, as Meta intends to make user data available to third parties without clear limitations.

The Irish Data Protection Commission (DPC), one of Meta’s regulators in the EU, has faced criticism for allegedly making deals with Meta that allow the company to sidestep GDPR compliance.

Given the impending deadline, noyb has requested an “urgency procedure” under Article 66 GDPR to impose a preliminary halt on Meta’s new policy. This request aims to safeguard the personal data of millions of European users and ensure compliance with data protection laws.

In a related development, Democratic Assemblymember Jacqui Irwin, a former tech insider, is taking on the tech industry with a bill that would require artificial intelligence developers to disclose the data used to “train” their systems, CalMatters reports.
