OpenAI puts more guardrails on celebrity likenesses after Cranston leads pushback

YouTube also worried about celebrity deepfakes
The daughter of beloved comedian Robin Williams recently issued a public plea to fans: please stop sending AI deepfakes of her late father. “You’re not making art,” she wrote on Instagram. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

With the release of OpenAI’s Sora 2 into the world, the ability to easily generate convincing fake video went mainstream – and quickly went sour. A report from NPR describes hyper-realistic deepfake videos of assassinated civil rights leader Martin Luther King stealing from a grocery store, alongside videos featuring Princess Diana, Kurt Cobain, Tupac Shakur, Michael Jackson and Malcolm X (all deceased).

In response to concerns from public figures over unauthorized use of their likenesses, OpenAI has issued a joint statement with Hollywood insiders, announcing “strengthened guardrails around replication of voice and likeness when individuals do not opt-in.” That means all artists, performers, and individuals will have the right to determine how and whether they can be simulated.

OpenAI says the new framework aligns with the principles embodied in the NO FAKES Act, pending federal legislation designed to protect performers and the public from unauthorized digital replication.

A statement from SAG-AFTRA, the actors’ labor union, says that Breaking Bad actor Bryan Cranston prompted the change when he contacted SAG-AFTRA about unauthorized generative iterations of his face. “I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston says. “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.”

SAG-AFTRA President Sean Astin, who played hobbit Samwise Gamgee, says Cranston is just one of “countless performers whose voice and likeness are in danger of massive misappropriation by replication technology.”

“I’m glad that OpenAI has committed to using an opt-in protocol, where all artists have the ability to choose whether they wish to participate in the exploitation of their voice and likeness using A.I. Simply put, opt-in protocols are the only way to do business and the NO FAKES Act will make us safer.”

OpenAI’s new guardrails also extend to the Martin Luther King videos, meaning his likeness will be blocked. But the company maintains there are “strong free speech interests” in allowing users to make AI deepfakes of historical figures, provided their estates are in agreement.

OpenAI CEO Sam Altman says the company is “deeply committed to protecting performers from the misappropriation of their voice and likeness” and will “always stand behind the rights of performers.”

However, at least some of the children of public figures and performers would likely support the simpler suggestion made on social media by Bernice King, MLK’s daughter: “Please stop.”

YouTube adds likeness reporting feature

YouTube is also worried about celebrity deepfakes, and is beginning to expand its likeness detection feature to all creators in the YouTube Partner Program. A video explainer says that “if you see a video that uses your likeness without your permission, you will be able to send a removal request for review under YouTube’s privacy guidelines.”

The feature is currently only able to detect generative AI or synthetic visuals; voice manipulation is not covered. To participate, users must submit a government ID and brief video selfie to YouTube for identity verification, and “give the feature source material to draw from in its review.”

For celebrities, the issue of likeness detection may be a matter of endorsements and stolen income. But the issue of likeness appropriation is also a major concern for young people, and especially young women. A 2019 report estimated that 96 percent of all deepfakes online are pornographic, and 99 percent involve women who did not consent to their likeness being used. Since then, the number of deepfakes has exploded, and there is little reason to doubt that the proportions remain stable. (Any shift would have to factor in the parallel explosion in impersonation scams.)

Celebrities, however, can’t be blamed for worrying: many are just successful creative workers, and their livelihoods are as threatened by generative AI as the independent composer, the novelist and the animator. Some of the hype around Tilly Norwood, an AI-generated character marketed as an actor for hire, was manufactured – but some was born of real fear.

A statement from the U.S. Motion Picture Association says that, since Sora 2’s release, “videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media.” The global audience has shown its appetite for endless new iterations of the same stories. With all the actors that have already won the public’s hearts over the decades, why not just choose from the established pool? Why hire Jenna Ortega for your project when you can just generate young Audrey Hepburn? In a deepfaked world, every image is replaceable.
