Global efforts to combat deepfakes intensify with new laws, biometric tools
With rapid technological advances, the fight against deepfakes has taken on greater urgency worldwide. From academia to governments and businesses, numerous stakeholders are stepping up efforts to detect, regulate, and protect against these digital threats.
One notable development comes from the University at Buffalo, which announced its Deepfake-O-Meter, a tool designed to democratize deepfake detection. According to a report published by the university, the tool aims to improve public access to deepfake identification, letting users check whether videos or images have been manipulated and empowering individuals to protect themselves against disinformation increasingly driven by AI-generated media.
Elsewhere, a team of scholars from Hong Kong and Macau recently won a global deepfake detection challenge spanning a range of image types and scenarios, a bright spot amid the proliferation of faked and manipulated content online.
The business world is also addressing the rise of deepfakes. Companies like authID have responded to the threat with comprehensive white papers. In a recently published report, the company highlights how deepfakes pose significant risks to enterprises, particularly in financial services. The paper outlines strategies to protect businesses against impersonation attacks and emphasizes the importance of integrating biometric security tools as a defense.
Growing impact of deepfakes across various sectors
Meanwhile, Singapore is taking decisive action against the potential misuse of deepfakes in elections. In a proactive move, the Singaporean government is moving forward with legislation that would ban deepfakes targeting election candidates. The regulation is part of a broader effort to preserve the integrity of the political process by preventing the spread of false information during election periods.
On September 9, 2024, the Ministry of Digital Development and Information (MDDI) introduced the Elections (Integrity of Online Advertising) (Amendment) Bill in Parliament, Singapore Business Review reports. The proposed legislation seeks to ban the publication of digitally altered or generated online election ads that portray a candidate as saying or doing something they did not actually say or do.
This comes as a recent Au10tix report reveals the APAC region has become highly susceptible to identity fraud, with AI-powered Fraud-as-a-Service (FaaS) driving a 1,530 percent surge in deepfake-related incidents.
Turning to crypto wallet security, cybersecurity firm Gen Digital reports that malicious actors significantly ramped up AI-powered deepfake scams targeting crypto holders in the second quarter of 2024. A spokesperson told TradingView that the technique could be used to deceive wallets that rely on facial recognition, allowing hackers to gain access, and urged members of the crypto community to learn how these attacks work.
In a media landscape where credibility is crucial, Ghanaian journalists are training to detect AI-generated content and deepfakes to ensure accurate reporting. The initiative aims to prevent the spread of misinformation in a region where media plays a key role in societal stability, VOA reports.
Looking ahead, security firms predict that AI deepfake attacks will soon extend beyond video and audio, potentially infiltrating text and other digital formats.