Deepfake research is growing and so is investment in companies that fight it

AI-generated content such as deepfakes is facing increasing scrutiny. Three new research resources – authored by search giant Google, identity verification company Onfido and political encyclopedia Ballotpedia – examine why we should be concerned about the technology, analyzing its biggest risks, its influence on elections in the UK and rising regulation against deepfakes in the U.S.

Meanwhile, investment is flowing into companies that are developing tools to fight generative AI threats, including Loti AI and GetReal Labs.

Google: Politics are the main reason for misusing generative AI

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason for misusing the technology is to influence public opinion – including swaying political opinion – but it is also finding its way into scams, fraud and other profit-making schemes.

The paper was published by researchers at Google’s AI research lab DeepMind and its research and development unit Jigsaw. The research surveyed scientific papers as well as 200 media reports on the misuse of generative AI systems published between January 2023 and March 2024.

According to the research, generative AI was misused to influence public opinion in 27 percent of cases. This does not only include media designed to sway election campaigns: Charged synthetic images have been cropping up around politically divisive topics, such as war, societal unrest or economic decline.

“The increased sophistication, availability and accessibility of GenAI tools seemingly introduces new and lower-level forms of misuse that are neither overtly malicious nor explicitly violate these tools’ terms of services, but still have concerning ethical ramifications,” the report notes. “These include the emergence of new forms of communications for political outreach, self-promotion and advocacy that blur the lines between authenticity and deception.”

Monetization of products and services, such as low-quality AI-generated articles, books and products as well as sexually explicit material, was the second top reason for misusing the technology, accounting for 21 percent of reported cases. Third place went to scams and fraud.

Impersonations of celebrities or public figures, for instance, are often used in investment scams, while AI-generated media is also deployed to bypass identity verification and carry out blackmail, sextortion and phishing scams.

Because the primary data source is media reports, the researchers warn that the perception of generative AI misuse may be skewed toward cases that attract headlines. But despite concerns that sophisticated or state-sponsored actors would exploit generative AI, many of the misuse cases were found to rely on popular tools that require minimal technical skill.

Two-thirds of Brits concerned over deepfakes in elections: Onfido

The UK is currently preparing for its general election on July 4, which will usher in a new prime minister. Over 40 percent of UK citizens, however, believe that voters could be swayed by deepfakes, a new survey conducted by Onfido has shown.

Brits are becoming increasingly skeptical online with almost a quarter (23 percent) saying they do not trust any political content on social media. The skepticism may be warranted: In January, more than 100 deepfake video ads impersonating Prime Minister Rishi Sunak cropped up on Facebook, reaching as many as 400,000 people.

The survey, which queried the opinions of over 2,000 UK adults, found that 64 percent of voters are not confident they could tell whether political online audio or video content is fake. AI could help reduce the prevalence of deepfakes and restore some trust in online interactions, says Onfido Global Policy Director Aled Lloyd Owen.

“AI has a pivotal role in the defense against malicious deepfakes,” he says. “It can recognize subtle differences in content that are often imperceptible to the human eye, while it can scale prevention based on demand.”

Onfido has previously raised concerns over deepfakes, calling for more attention from lawmakers to generative AI use in impersonations, fraud and scams. Generative AI, however, may cost Brits more than just money: 66 percent believe that deepfakes and fake news could seriously harm democracy in the UK.

Investment pours into anti-deepfake solutions

With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.

Loti AI offers protection for public figures by detecting and removing their deepfakes and impersonator accounts from social media. The startup claims that it scans over 100 million images and videos a day and removes unauthorized content with 95 percent effectiveness.

Last week, the Cook Islands-registered company announced seed funding of US$5.15 million led by Seattle, U.S.-based venture capital firm FUSE, with participation from Bling Capital, K5 Tokyo Black, Ensemble, and AlphaEdison.

Aside from deepfakes and fake accounts, Loti AI also targets fake endorsements and unlicensed distribution of content.

Another new company taking on the scourge of deepfakes is GetReal Labs. The company was incubated by Californian cybersecurity-focused venture capital firm Ballistic Ventures.

The startup was co-founded by Ted Schlein, general partner at Ballistic Ventures, and Hany Farid, a digital forensics expert and a professor at UC Berkeley involved in the Berkeley Artificial Intelligence Lab, the Institute for Data Science and the Center for Innovation in Vision and Optics.

The company focuses on protecting organizations, including financial and media companies and government agencies, from manipulated media such as voice and video.

Aside from Ballistic Ventures, GetReal is also backed by Venrock.

US introduced almost 300 bills to fight deepfakes

Regulation against deepfakes is growing: The U.S. currently has 294 bills attempting to regulate AI-generated content, according to new data provided by Ballotpedia.

The nonprofit organization, which aims to provide trusted information on U.S. elections, has released detailed data on deepfake regulation in its new AI Deepfake Legislation Tracker, covering all 50 states. According to its data, the number of bills in this space rose 28 percent on average each year between 2019 and 2023.

Ballotpedia has also published a State of Deepfake Legislation 2024 Annual Report.
