
New Reality Defender Ethics Committee not mere theater, says CEO

Trifecta of founding members brings experience from Google, Match Group, Yale

“Most ethics committees are theater. This is not one of those.” So begins a new post from Reality Defender CEO Ben Coleman, announcing the formation of the Reality Defender Ethics Committee.

Three founding members make up the committee: Keith Enright, Luciano Floridi, and Yoel Roth. Coleman lists their bona fides: “Keith was Google’s chief privacy officer for over a decade and now leads strategy at Harvey. Luciano runs Yale’s Digital Ethics Center and helped architect the EU AI Act’s ethical framework. Yoel led trust and safety at Twitter and now runs trust and safety at Match Group. Three careers, all spent asking harder questions about technology, privacy, and accountability.”

Verifier’s power comes with great responsibility

Coleman’s post takes a high-level view of so-called “epistemic authority,” or the power to confirm knowledge. Floridi calls this “verifier’s power” – and says deepfake detection companies are accumulating it without most people realizing what it means.

As detection becomes infrastructure, Coleman says, the organizations that certify what’s real are deciding, in contested cases, what counts as likely authentic. “That’s a lot of power which needs oversight. If we don’t build that oversight ourselves, regulators will eventually build it for us, and they’ll build it badly.”

Coleman promises that the committee exists to push back on potential overreach and to weigh operational questions: “how we communicate uncertainty in a verdict, how we handle false positives at scale, and who has access to flagged content (and for how long).”

The power of the verifier comes with complex questions, few of which have clean answers. “Some of them never will,” says Coleman. “The point of the committee isn’t to produce cleaner answers, but to make sure the answers we ship were actually wrestled with.”

Coleman concludes: “Synthetic media is an ever-present risk with real, tangible costs. The companies building detection have a responsibility to take that seriously, and a responsibility to be held accountable for how seriously we take it. That work starts today.”

New deepfake detection dataset aims to reflect current GenAI landscape

Others continue work on the problem of how to save reality from dissolving into a gooey puddle of generative AI. A team of researchers from Microsoft, Northwestern University and nonprofit organization Witness have collaborated on “a novel dataset of AI-generated media to help build more robust detection systems,” according to an article in IEEE Spectrum.

The Microsoft-Northwestern-Witness (MNW) deepfake detection benchmark was “intentionally built using diverse samples of AI-generated media in order to reflect the current AI-generation landscape as much as possible.” It aims to include a “very diverse sample of AI-generated material from different generators to boost detectors’ applicability in real-world settings.”

Thomas Roca, a principal research scientist at Microsoft who researches security around generative AI, says that “asserting the authenticity of video, images, and audio has become crucial for society, but detection systems are not yet up to the challenge.” He believes this is partly due to how these systems are evaluated.

Detection systems may perform well when tested against their training dataset or well-established benchmarks – but perform poorly in the real world. “AI in the lab is not AI in the wild,” Roca says.

The team plans to update the dataset every spring and fall, “to reflect the latest generator artifacts as well as tricks used to fool detection systems.”

New Hampshire has charged one man under its deepfake law

New Hampshire is taking steps to regulate deepfake media. WMUR reports on a new state law:  “a criminal defamation statute that makes the use of AI, particularly to create a deepfake, illegal and subject to criminal prosecution if it is used for the purpose of causing reputational harm.”

One person has been charged under the law: a man who allegedly created a deepfake video by altering a police officer’s voice in a police body camera video.

The story quotes Hany Farid, a digital forensics researcher at the University of California, Berkeley. “The ease with which you can make these things has just absolutely gotten obliterated,” he says. “We’re starting to see deepfakes in real time on Zoom calls, on Teams calls, on Webex calls, where you’re on a call with somebody, and it’s not a human or it’s not who you think it is.”

He says rapid advances mean that telltale artifacts from just a couple years ago have been fully ironed out: the era of six fingers is over. And there’s no going back.

“This is our new reality. And we’re going to have to start thinking about how to put some guardrails on this technology before it ends up taking us somewhere we don’t want to. This technology is being weaponized and we have got to start to get a handle on it as an electorate. Otherwise, our very democracy is at stake.”
