Australian MP wants to follow Denmark in legislating right to own one’s likeness

Deepfake problem becoming a public policy issue as nonconsensual use grows
The latest groundswell of legislative activity aimed at the tech sector concerns the question of who has the right to use a person’s likeness. Whereas existing licensing arrangements might limit how a famous person’s image is used, generative AI has amplified the problem with powerful deepfake tools, and the risk has spread beyond the Hollywood elite. The problem is perhaps most bluntly illustrated in the existence of nudify apps, which generate explicit deepfakes without a subject’s consent.

The harms can be significant, particularly for young people (and particularly young women). Research from UK regulator Ofcom shows the vast majority of sexually explicit deepfakes are of women, many of whom suffer from PTSD or anxiety as a result of being targeted. Deepfakes, Ofcom says, are “already doing serious harm to ordinary individuals – whether that is by being featured in nonconsensual sexual deepfake videos or falling victim to deepfake romance scams and fraudulent adverts.” 404 Media recently reported on the proliferation of videos, generated using OpenAI’s Sora 2, that show teenage girls being strangled.

Nations are beginning to grasp the extent of the deepfake threat, and some are responding with regulations. Denmark is expected to pass a bill that would amend its copyright law to ban the sharing of deepfakes, in an attempt to protect citizens' likenesses and voices from being used nefariously. Effectively, it would offer those personal traits protections similar to those for biometrics.

Now, Australia is pursuing a similar law to limit misuse of people’s likenesses.

‘You should own your face’: Pocock pushes for likeness law

ABC News reports that independent senator David Pocock has introduced a new proposal before federal parliament, which he says "would put into legislation that your face, voice and likeness is yours."

“To me, this seems like a bit of a no-brainer. You should own your face.”

Pocock is concerned that the government is moving too slowly to make sure laws reflect the state of reality when it comes to deepfakes. At present, he says, “unless a deepfake is sexually explicit, there’s very little that you can do as an Australian” to prosecute whoever created it.

But scams, commercial exploitation, disinformation and other misappropriations are becoming more common. Pocock calls the deepfake threat “a huge freight train that is coming at us.”

His bill proposes adding a dedicated complaints framework to the Online Safety Act, which would grant the eSafety Commissioner powers to demand deepfake removals and issue immediate fines. Individuals who share nonconsensual deepfakes would face an up-front penalty of AU$165,000 (about US$106,000). Companies that fail to comply with a removal notice could pay as much as AU$825,000 (about US$533,000).

Moreover, proposed changes to the Privacy Act would allow Australian citizens to bring civil lawsuits and sue perpetrators directly for financial compensation if they can prove that they suffered “emotional harm.”

Pocock says “we have to draw a line in the sand and say, this is not on – you cannot deepfake someone without their consent.”

Policymakers worry government is dragging its feet on AI laws

His colleague, independent MP Kate Chaney, is introducing a bill to criminalize the use of AI tools purpose-built to create child sexual abuse material. She says the government is “missing in action” on AI regulation.

“The US, the UK, Canada, Japan, Singapore all have an equivalent,” she says. “Australia has supported the idea of them but has not yet actually taken action on putting one forward. We can’t afford to sit around twiddling our thumbs wondering what to do about AI while it is changing so rapidly.”

The government this year reoriented its AI messaging to focus on the economic promise of the technology, and says it is not rushing into regulation.