Europe formalizes concerns about GenAI-enabled nonconsensual deepfakes

Europe is increasingly concerned about nonconsensual deepfakes. Spain is calling for an investigation into Meta, X and TikTok for allegedly distributing AI-generated child pornography and digital sexual violence. UK Prime Minister Keir Starmer has pledged to implement new powers to crack down on the addictive design features of social media, saying “if that means a fight with the big social media companies, then bring it on.” And Irish regulators have called for deepfake legislation to be fast-tracked in the wake of the mass distribution of sexualized images of children by X’s AI chatbot, Grok.
Now, in response to “serious concerns about artificial intelligence (AI) systems that generate realistic images and videos depicting identifiable individuals without their knowledge and consent,” the European Data Protection Board (EDPB) has signed a “Joint Statement on AI-Generated Imagery and the Protection of Privacy,” co-ordinated by the Global Privacy Assembly’s (GPA) International Enforcement Cooperation Working Group (IEWG) and representing the position of 61 authorities globally.
The statement says that “while AI can bring meaningful benefits for individuals and society, recent developments – particularly AI image and video generation integrated into widely accessible social media platforms – have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals.” Of particular concern is potential harm to children, such as cyber-bullying or exploitation.
“The co-signatories remind all organisations developing and using AI content generation systems that such systems must be developed and used in accordance with applicable legal frameworks, including data protection and privacy rules. We also highlight that the creation of non-consensual intimate imagery can constitute a criminal offence in many jurisdictions.”
X marks the festering sore
This last proviso – and indeed the whole statement – feels aimed in the direction of X, which brought the issue of nonconsensual AI-generated imagery mainstream when its large language model chatbot, Grok, began spewing out exploitative nude deepfakes at scale. The issue, however, has apparently failed to spur X’s owner, Elon Musk, into action; last week, Musk launched the latest update to Grok, which, according to an investigation by Dutch newspaper AD, “seems to have removed any kind of restraint or limitation.”
The statement calls on platforms to implement “robust safeguards to prevent the misuse of personal information and generation of non-consensual intimate imagery and other harmful materials, particularly where children are depicted;” to ensure “meaningful transparency” about what AI is capable of, what’s acceptable and what safeguards are in place; and to provide an effective, accessible and fast process for requesting the takedown of nonconsensual images.
Given the specific focus on children, it also recommends providing clear, age-appropriate information to children, parents, guardians and educators.
“The harms arising from non-consensual generation of intimate, defamatory, or otherwise harmful content depicting real individuals are significant and call for urgent regulatory attention,” the statement says. “This reflects our shared commitment and joint effort in addressing a global risk.”
The collective may want to start by addressing the new Grok: on testing the 4.2 update, AD investigators “created a video of the Tweede Kamer, the lower house of the Dutch parliament, doing the Nazi salute and chanting ‘Mein Führer’ within seconds.”