Tale of two platforms sees Sora flood web with deepfakes, World save the day

For those leading the charge on large language models and generative AI, it is truly the best of times and the worst of times: the age of abundance, and the age of slop. Having unleashed the dubious wisdom of GPT-5 into the world, OpenAI has now seen the latest version of its AI video engine, Sora, hit a million downloads within five days, reaching that milestone even faster than ChatGPT did.
According to an article from NPR, “videos made with OpenAI’s Sora app are flooding TikTok, Instagram Reels and other platforms, making people increasingly familiar – and fed up – with nearly unavoidable synthetic footage being pumped out by what amounts to an artificial intelligence slop machine.”
OpenAI, it seems, “has essentially rebranded deepfakes as a light-hearted plaything” – and the proverbial algorithm loves it. The piece suggests it’s “as if deepfakes got a publicist and a distribution deal.”
Such is the season of optimism – to be followed, inevitably, by the season of alarm.
Here they come to save the day: World to solve AI problem
A post from World, which shares a founder with OpenAI in Sam Altman, notes in stark terms that “90 percent of the content on the internet will be AI generated by 2026.” Voice-cloning scams rob unwitting Newfoundland grandmas of their savings. In California, bots flood the application process for student financial aid. “Even public sentiment can’t be trusted: When Cracker Barrel changed its logo in August 2025, 44.5 percent of initial social media outrage was bot-generated.”
“When one person with a bot farm can simulate thousands of voices, democratic institutions built on one-person-one-vote crumble,” World says. “With deepfake fraud attempts up 3,000 percent in 2023, every call and video now carries doubt: is this really who I think it is?”
In other words, AI is inundating the internet, crippling our political structures, and undermining reality.
Cue the familiar meme: a man in a hot dog costume stands among a roomful of people trying to work out who crashed a hot-dog-shaped car. “We’re all trying to find the guy who did this,” the caption goes. In Silicon Valley parlance, “create the problem, sell the solution.”
That would be World ID, World’s biometric decentralized digital identity-and-everything-else project. “As AI capabilities accelerate, the window for establishing robust human verification narrows,” says its post. “By verifying unique humanness once, individuals can interact across services knowing every other participant is genuinely human.”
AI’s great expectations being transferred to digital ID, PoP
In screenwriting, it is known as the “find the cure” trope. A character is blackmailed or coerced into doing something through the administration of slow-acting poison or illness; a cure is dangled if the victim follows orders. In this case, the poison is AI slop, and the antidote is to be found in deepfake detection and proof of personhood.
It’s not just OpenAI, either; Elon Musk recently posted on X that his Grok chatbot – most famous for a bitterly racist meltdown over the treatment of white people in South Africa and for declaring itself MechaHitler – “will be able to analyze the video for AI signatures in the bitstream and then further research the Internet to assess origin.”
Grok agrees, posting in reply to its creator that advancing its capabilities to detect subtle AI artifacts, such as compression inconsistencies or invisible generation patterns, “arms truth-seekers against fabrication floods, restoring trust in visuals.”
To once again paraphrase Dickens, “we’re all going direct to AI heaven – and we’re all going direct the other way.”
Article Topics
AI slop | biometrics | deepfake detection | deepfakes | digital ID | generative AI | Grok | OpenAI | proof of personhood | Sora | World