Meta to tackle Australia’s deepfake problem during elections amid detection lapses

Meta says it will help address deepfakes and other disinformation concerns during Australia’s federal election in May. The tech giant’s commitment comes at a time when deepfake threats in political campaigns are on the rise, and when one study has shown that some of the available software to combat them is unreliable in real-world tests.
In a blog post cited by Reuters, the company, which owns Facebook and Instagram, says it will remove deepfakes and other flagged false content from its platforms in Australia, a move that aligns with its mission to combat misinformation and disinformation during elections.
The initiative will be implemented via its independent fact-checking program in Australia, the company says, adding that all content that is likely to cause physical violence or other forms of harm will be expunged from its social media platforms.
Additionally, the company indicates that deepfake content that contravenes its policy guidelines will either be removed outright or tagged with a label that reduces its visibility in feeds.
Meta’s Head of Policy in Australia, Cheryl Seeto, is quoted as saying that “for content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI.”
Meta’s announcement on combatting deepfakes during the upcoming Australian election comes as researchers report major vulnerabilities in deepfake detection systems.
According to InnovationAus, a joint study by teams from Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) and South Korea’s Sungkyunkwan University found that none of the software tested could reliably detect real-world deepfakes.
The researchers said that of the 16 systems they tested, not a single one proved reliable, pointing to “an urgent need for more adaptable and resilient solutions to detect them.”
One of the researchers and co-author of the report, Dr Sharif Abuadbba of CSIRO, said “as deepfakes grow more convincing, detection must focus on meaning and context rather than appearance alone.”
Abuadbba further suggested, as quoted, that “to keep pace with evolving deepfakes, detection models should also look to incorporate diverse datasets, synthetic data, and contextual analysis, moving beyond just images or audio.”
There are already fears that the proliferation of deepfake content could fuel disinformation during the upcoming federal election in Australia.
The country’s election management body says it has launched a drive to fight disinformation on social media platforms such as TikTok, but notes that its powers do not extend to policing deepfakes.
In a virtual discussion hosted by Shadow Dragon, Dutch digital security expert Nico Dekens, American cybersecurity professional Brye Ravettine, and American singer David Cook exchanged views on how deepfakes and domain spoofing contribute to the spread of false information during election periods.
Their exchange touched on issues such as foreign influence operations, the spread of fake news and domain spoofing, the rise of deepfakes in elections, and strategies and best practices for identifying and countering disinformation.
Article Topics
Australia | deepfake detection | deepfakes | elections | fraud prevention | Meta | social media





