AI platforms flippant about stopping deepfake porn; authorities stepping in

Civitai, an online marketplace for AI-generated content, bans deepfake pornography but also sells files specifically designed to create it, according to a new paper by researchers at Stanford and Indiana University.
The report, which has not yet been peer reviewed, analyzed content requests on the platform, which is backed by famed Silicon Valley VC firm Andreessen Horowitz. The researchers found that the majority of requests (or “bounties”) on the site are not for images but for LoRAs (Low-Rank Adaptations), small adapter files that can make commercial AI models generate content they were not trained to produce. More than half of the requests observed were for sexually explicit (“NSFW”) content, and 86 percent of deepfake requests on Civitai involve LoRAs, the researchers say.
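The reason LoRAs are so cheap to trade is structural: instead of retraining a model's full weight matrices, LoRA trains only a low-rank update that is added to the frozen weights at inference time. A minimal numpy sketch of the idea (illustrative only; dimensions and names are hypothetical, and this is not any platform's actual implementation):

```python
import numpy as np

# Frozen base weight matrix of one hypothetical model layer (d_out x d_in).
d_out, d_in, r = 8, 8, 2           # r is the adapter rank, with r << d_in
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# LoRA trains only two small matrices, B (d_out x r) and A (r x d_in);
# their product is a rank-r update added to the frozen weights.
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))
alpha = 1.0                        # scaling factor applied to the update

W_adapted = W + alpha * (B @ A)    # effective weights at inference time

# The update has rank at most r, so an adapter file only needs to store
# B and A: (d_out + d_in) * r numbers instead of d_out * d_in.
print(np.linalg.matrix_rank(W_adapted - W))          # at most r
print(B.size + A.size, "adapter params vs", W.size, "full-layer params")
```

Because only `B` and `A` are distributed, an adapter that steers a large commercial model toward content it was never meant to produce can be a few megabytes, which is why the researchers found them dominating the site's bounty requests.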
Civitai announced in May 2025, after the research was completed, that its ban on deepfake porn depicting real people would extend to all deepfakes. Around the same time, the company’s payment processor suspended its services over the platform’s problem with nonconsensual content. However, deepfake requests and responses remain available on the site, according to MIT Technology Review.
French prosecutors, UK regulators, 37 AGs agree: Grok must stop
French authorities have been looking into the well-documented deepfake and CSAM problems on X for over a year. The cybercrime unit of the Paris prosecutor raided the company’s offices in France on Tuesday.
Suspected legal violations by X include unlawful data collection and complicity in child pornography possession, the BBC reports. The Guardian reports Europol also participated in the raid, expanding the potential scope of its implications.
In the UK, the Information Commissioner’s Office has launched its own investigation into Grok, just as Ofcom declared its own investigation “a matter of urgency.”
The social media giant has previously claimed that investigations into its alleged criminal behavior are motivated by geopolitics or by a desire to suppress free speech.
But the wave of legal pushback has reached domestic shores, with 37 US state attorneys general taking action against xAI over Grok’s porn fakes, by Wired’s count. Thirty-five of them have signed an open letter demanding the company step up its efforts to prevent nonconsensual adult content, and the AGs of Florida and California say they have taken action. More state-level legislation enabling prosecution of those who create and share CSAM is also coming.
The open letter was co-signed by the AG of Illinois, signaling that the state with the most stringent biometric data privacy laws is already aware that the AI model may be scraping biometric data, in violation of BIPA, to create its deepfake images.
The U.S. Senate acknowledged the problem of images like those created on X or enabled by Civitai by passing the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2025 (the DEFIANCE Act) in mid-January.
Reality Defender CEO and Co-founder Ben Colman explains the enforcement mechanisms and privacy protections of the Act in a company blog post, along with its limitations as a reactive measure.
Article Topics
AI fraud | biometric data | Civitai | deepfakes | generative AI | Grok | legislation | regulation | social media | X (twitter)