
Grok’s image processing feature is a mass violation of biometric privacy laws

‘Biometric data processing crisis that happens to include minors as victims’

The world, or some part of it, has decided in the last month or so that generating sexualized images of children is okay, as long as it’s being done on X by a large language model named Grok. The moral implications aside, there may well be another legal hammer about to fall on Elon Musk’s chatbot: namely, biometrics, and the laws that protect them.

A blog from Captain Compliance, written by Graeme Whiles, points out that, even without factoring in the creation of child sexual abuse material (CSAM), “the mass processing of biometric data without consent, at industrial scale,” is in “direct violation of established data protection frameworks.”

The EU’s GDPR, Illinois’ BIPA and other frameworks afford special category protection to biometrics. The piece argues that privacy professionals should push back against framing the matter as a child safety issue. “This is a biometric data processing crisis that happens to include minors as victims.”

Dismissing the child safety angle is a choice, considering how much legislative juice has been poured into that particular tank over the past two or three years. However, the point is not so much that nudifying pictures of women is okay, but that Grok’s model is unlawful to begin with from a privacy perspective.

“Every Grok-generated deepfake required processing someone’s facial biometric data – extracting unique biological characteristics from uploaded images without permission from the data subject,” says the piece. “One researcher tracking Grok’s output found approximately 6,700 sexually suggestive images generated per hour. That’s over 160,000 instances of biometric data processing daily, each potentially constituting a separate GDPR or CCPA violation.”

Ofcom response insufficient, but BIPA could support class action

At that rate, Grok’s violations pile up at a staggering scale. The question is how the available legal tools can be leveraged for meaningful enforcement. The UK has already seen X take the fastest, least effective and most profit-driven response to demands from Ofcom, by limiting Grok’s image generating feature to paid accounts.

“From a privacy compliance perspective, this approach is legally insufficient,” says the piece. “Payment status doesn’t constitute informed consent for biometric data processing. The move demonstrates a fundamental misunderstanding of data protection principles. The violation isn’t who can use the tool – it’s that the tool processes biometric data of non-consenting individuals regardless of who operates it.”

Some nations have begun the legal crackdown. Malaysia and Indonesia have banned Grok. Irish leaders are hearing calls to fast-track legislation criminalizing deepfakes. Nonetheless, the problem is far from solved.

A sharper legal tool could be available. Illinois’ Biometric Information Privacy Act (BIPA) requires written consent before collecting biometric identifiers, and includes a private right of action allowing plaintiffs to sue for per-violation damages of $1,000 to $5,000. Plaintiffs would have no trouble pointing to enough violations to support a class action – a route taken often enough under BIPA to earn the law its litigious reputation, though curbed somewhat by a 2024 amendment treating repeated collection of the same biometric information as a single violation.
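To give a sense of the stakes, here is a back-of-envelope sketch combining the researcher’s output estimate quoted above (roughly 6,700 images per hour) with BIPA’s statutory damages range. This is purely illustrative arithmetic, not a damages model: actual exposure would turn on how many unique individuals are involved, especially after the 2024 amendment collapsing repeated collection of the same person’s biometrics into a single violation.

```python
# Back-of-envelope scale estimate using the figures cited in the article.
IMAGES_PER_HOUR = 6_700            # researcher's estimate of Grok's output
DAILY = IMAGES_PER_HOUR * 24       # instances of biometric processing per day

# BIPA statutory damages per violation: $1,000 (negligent) to $5,000
# (intentional or reckless) under 740 ILCS 14/20.
DAMAGES_LOW, DAMAGES_HIGH = 1_000, 5_000

print(f"Daily instances:        {DAILY:,}")
print(f"Hypothetical exposure:  ${DAILY * DAMAGES_LOW:,} "
      f"to ${DAILY * DAMAGES_HIGH:,} per day")
```

Run as written, this yields roughly 160,800 daily instances, which is where the article’s “over 160,000” figure comes from, and a notional nine-figure daily exposure even at the low end of the statutory range.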

The argument speaks to a strange time in the global value system. It illustrates, in part, how companies like X get away with violations on a massive scale, largely because everyone is paying attention to the absolute worst thing they’re doing. Even if Elon Musk were to shut down X’s ability to generate sexually explicit images, he’d still be breaking a raft of laws by collecting anyone’s biometrics and processing them without consent.

Technical controls, aggressive enforcement needed

The situation raises urgent new questions for privacy professionals, and demands a reframing of due diligence. Is it possible to obtain explicit, informed consent from every individual whose biometric data a system might process, directly or indirectly? Are technical controls in place to back up policy? Is the world ready to coordinate simultaneous regulatory action across jurisdictions with very different legal standards?

For this final problem, interoperability standards that treat biometric data processing consistently will be key. But rules only go so far. “What’s needed isn’t more laws – it’s enforcement architecture that matches AI’s operating speed.”

The author believes that for the privacy sector, the question is now, “how many more Grok-scale incidents before biometric data processing in AI systems faces presumptive prohibition unless proven compliant? Every day without robust technical controls, clear legal frameworks, and aggressive enforcement creates thousands more violations.”

“The technology moved faster than law, policy, or corporate governance. Biometric data of millions was processed without consent, generating explicit imagery that will exist in perpetuity across internet archives. No amount of post-incident response changes that fundamental privacy violation.”
