Privacy advocates push back against Meta’s data usage for AI development


Meta has notified millions of European users of upcoming changes to its privacy policy, set to take effect on June 26, 2024. According to Silicon, the company plans to use personal data, including years of posts, images, and tracking information, to develop unspecified AI technologies and to share data with undefined third parties. The change has prompted backlash from privacy advocates and regulatory bodies.

Max Schrems, a privacy advocate, criticizes Meta’s approach, stating that the company aims to use any data from any source for any purpose under the guise of “AI technology.” Schrems highlights that this practice is fundamentally at odds with the General Data Protection Regulation (GDPR) requirements.

In response to Meta’s policy update, privacy organization noyb has filed complaints in 11 European countries, urging authorities to halt the implementation immediately. They argue that Meta’s plan lacks transparency and fails to provide users with adequate control over their personal data. Noyb’s complaints highlight numerous violations of GDPR, including issues with transparency, data protection principles, and the right to be forgotten.

Schrems expresses concern over Meta’s “broad and undefined use of AI technology,” noting that the company has not provided any specific details on how the data will be used. He warns that this could lead to severe privacy infringements, as Meta intends to make user data available to any third party without clear limitations.

The Irish Data Protection Commission (DPC), one of Meta’s regulators in the EU, has faced criticism for allegedly making deals with Meta that allow the company to sidestep GDPR compliance.

Given the impending deadline, noyb has requested an “urgency procedure” under Article 66 GDPR to impose a preliminary halt on Meta’s new policy. This request aims to safeguard the personal data of millions of European users and ensure compliance with data protection laws.

In a related development, Democratic Assemblymember Jacqui Irwin, a former tech insider, is taking on the tech industry with a bill that would require artificial intelligence developers to disclose the data used to “train” their systems, CalMatters reports.
