
Deepfakes are a lurking ghost, with many unaware of increasing risk

Santander research shows low awareness as Beyond Identity, Veridas, Corsound raise alarm

For those in the biometric trenches, deepfakes may seem like an ever-present threat, or at least an effective and unavoidable boogeyman. But new research from Santander says more than half of people in Britain have never heard the term, do not understand what it means, or find it confusing. Just 17 percent say they could identify a deepfake video.

To raise awareness, Santander has teamed up with influencer Mr. MoneyJar to create deepfake videos featuring the financial advisor alongside Santander fraud lead Chris Ainsley, aiming to showcase how convincing deepfakes have become.

“Generative AI is developing at breakneck speed, and we know it’s ‘when’ rather than ‘if’ we start to see an influx of scams with deepfakes lurking behind them,” Ainsley says. “We already know fraudsters flood social media with fake investment opportunities and bogus love interests, and unfortunately, it’s highly likely that deepfakes will begin to be used to create even more convincing scams of these types. If something – or someone – appears too good to be true, it’s probably just that.”

While many Brits have their heads in the (possibly fake) sand, there are plenty who are worried. More than a third say they have knowingly watched a deepfake, most of them on social media. The biggest worry is that deepfakes will be used in financial scams; other concerns include deepfakes being used to nefariously influence elections, or to simulate biometric data. Six in 10 people (59 percent) say the threat of deepfakes has made them more suspicious of what they see or hear.

Sometimes, says Mr. MoneyJar – AKA financial advisor Tim Merriman-Johnson – common sense is the best defense.

“People don’t tend to broadcast lucrative investment opportunities on the internet,” he says. “If you are ever in doubt as to whether a company or individual is legitimate, you can always search for them on the Financial Conduct Authority Register.”

Santander’s tips for spotting deepfakes double down on that point, especially as the visual artifacts that currently appear in deepfakes are refined out of the generation process, making detection with the naked eye effectively impossible. The bank advises learning the telltale signs of the scam types deepfakes are commonly used for, such as investment scams, impersonation fraud and romance scams.

“At some point, deepfakes will become impossible to distinguish from real videos, so context is important,” the bank’s guidance says. “Ask yourself the same common-sense questions you do now. Is this too good to be true? If this is real, why isn’t everybody doing this? If this is legitimate, why are they asking me to lie to my family and/or bank?”

Beyond Identity offers a RealityCheck for deepfakes on Zoom

Beyond Identity has launched RealityCheck, an identity assurance plugin for Zoom. According to a press release, RealityCheck (a refreshingly clever name in a sea of similar-sounding products) protects organizations from AI-assisted fraud such as impersonation attacks and deepfakes by certifying the authenticity of Zoom call participants using Authenticator Assurance Level 3 (AAL3) and device security verification.

Jasson Casey, CEO of Beyond Identity, says that the rapid rise of deepfakes has made it more urgent for businesses to equip themselves with fraud protection. Evoking recent deepfake cases that led to “devastating results” for the affected parties, he says RealityCheck “focuses on the prevention of AI impersonation attacks and deepfakes in video conferencing applications,” calling it “the first tool developed to purposely address this new type of attack.”

“Many organizations do not have in place cybersecurity strategies to combat AI deception attacks,” he says. “They are further challenged by the fact that most deepfake detection tools and end-user training are probabilistic and cannot offer solid guarantees.” RealityCheck, he says, “shifts the focus to authentication assurances to make deterministic claims.”

Once it is up and running, the tool applies a badge of dynamic authentication to a user’s camera layer on a Zoom call and displays a side panel with additional data about device and user risk. It also ensures devices meet organizational security standards and continuously verifies and authenticates both users and devices.
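
Beyond Identity has not published RealityCheck’s internals, but the shift from probabilistic detection to deterministic claims maps onto standard device-bound public-key authentication, the mechanism underlying AAL3. Below is a minimal, hypothetical sketch in Python (using the `cryptography` package); the enrollment flow and names are illustrative assumptions, not Beyond Identity’s API.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import os

# Enrollment (done once): the private key would live in the device's secure
# hardware; only the public key is registered with the organization.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Per-meeting check: the service issues a fresh challenge and the device
# signs it, proving possession of the enrolled key.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("Participant verified: enrolled device, phishing-resistant credential")
except InvalidSignature:
    print("Verification failed: no authenticity badge")
```

The appeal of this design is that the check has exactly two outcomes: either the challenge was signed by a key enrolled to a known user on a compliant device, or it was not. No confidence score is involved, which is what makes the claim deterministic rather than probabilistic.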

RealityCheck for Zoom is embedded in Beyond Identity’s Secure Access platform. In addition to preventing fraud, it can be used for ID verification in onboarding, “by delivering verification that the employee is actually on an authorized device and strongly authenticated with phishing-resistant MFA when verifying identity documents over Zoom.”

Veridas secures firm positioning in U.S. market, rolls out Voice Shield

Spanish biometrics provider Veridas has announced what a release calls “significant strides in its expansion within the U.S. market,” as its voice authentication product gains traction in call centers and it rolls out its latest voice fraud prevention tool, Voice Shield, with the support of Scaled Ventures.

Voice Shield, the release says, analyzes voice data in milliseconds to provide real-time voice authentication, using secure and unchangeable templates that do not store biometric data. It operates effectively regardless of language or script, making it well suited to international markets, and requires no pre-enrollment or registration, providing invisible protection that “does not affect the conversion funnel.”
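
Veridas does not describe Voice Shield’s internals beyond these claims, but “no pre-enrollment” and millisecond-level analysis suggest the general shape of streaming, chunk-by-chunk scoring. The following Python sketch is hypothetical; the scoring function is a stand-in for a trained anti-spoofing model, not Veridas’ algorithm.

```python
import numpy as np

SAMPLE_RATE = 8000   # typical telephony sampling rate
CHUNK_MS = 500       # score the call every half second

def spoof_score(chunk: np.ndarray) -> float:
    """Placeholder for a trained anti-spoofing model; returns P(synthetic)."""
    # A real detector would extract spectral features and run a classifier.
    return float(np.clip(np.abs(chunk).mean(), 0.0, 1.0))

def monitor_call(stream: np.ndarray, threshold: float = 0.8) -> None:
    """Flag suspicious chunks as audio arrives, with no enrolled voiceprint."""
    chunk_len = SAMPLE_RATE * CHUNK_MS // 1000
    for i in range(0, len(stream) - chunk_len + 1, chunk_len):
        score = spoof_score(stream[i:i + chunk_len])
        status = "FLAG" if score >= threshold else "ok"
        print(f"{i / SAMPLE_RATE:4.1f}s  score={score:.2f}  {status}")

# Demo: three seconds of random noise standing in for call audio.
monitor_call(np.random.default_rng(0).normal(scale=0.5, size=SAMPLE_RATE * 3))
```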

Kevin Vreeland, North America General Manager at Veridas, notes that the firm has seen “an incredible surge in our voice authentication customers, growing fivefold in just the past three months.”

The company’s collaborations with Scaled Ventures look to multiply that number further, as Veridas aims to solidify its position in the U.S. market. Vreeland calls them “significant milestones for Veridas.”

“Our voice biometrics and deepfake detection software are changing the rules for good, providing unparalleled security and convenience,” he says. Noting that Veridas has been “crafting and refining these solutions for years as a vital part of our core offerings,” he says the launch of Voice Shield and the Scaled Venture projects make Veridas “more equipped than ever to protect companies and end-users from the evolving threats of fraud, particularly those arising from the misuse of generative AI and deepfakes.”

Corsound AI paper goes deep on voice deepfake detection

Tel Aviv startup Corsound AI, which serves customers in law enforcement, banking and finance, has released a white paper titled “How to prevent identity fraud with complete voice deepfake protection.” Promising “insights for financial institutions, banks, telecoms and law enforcement (among others) for safeguarding data, financial assets and individuals,” the paper further emphasizes the scale of the emerging deepfake threat.

“The significant risk posed by voice deepfake-powered fraud cannot be understated,” it says. “In the U.S. alone the number of cases rose by 3,000 percent in 2023 over 2022. And when it comes to voice deepfake-powered fraud 37 percent of organizations worldwide have already been hit.”

Voice cloning is a favorite of fraudsters, says the paper, because of its relatively low cost, its effectiveness at lower resolutions, and the maturity of the technology – all of which make detection tough.

The paper outlines defense strategies that can help vulnerable entities arm themselves against deepfakes. These include voice intelligence and voice-to-face matching, the latter being an algorithm that promises to match a voice to a particular face, based on technology developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Corsound AI, the paper notes, presently “constitutes the only offering to provide the critical capability of voice-to-face matching for determining to which face a voice most likely belongs.”
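
The paper does not spell out the algorithm, but voice-to-face systems in the CSAIL lineage generally train voice and face encoders into a shared embedding space and match by similarity. The toy Python sketch below shows just that matching step, with random vectors standing in for real encoder outputs; all names are hypothetical.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_voice_to_faces(voice_emb: np.ndarray,
                         face_embs: dict[str, np.ndarray]) -> str:
    """Return the identity whose face embedding best matches the voice."""
    return max(face_embs, key=lambda name: cosine(voice_emb, face_embs[name]))

# Toy embeddings standing in for the outputs of trained voice/face encoders.
rng = np.random.default_rng(1)
faces = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
voice = faces["alice"] + 0.1 * rng.normal(size=128)  # near Alice in the space
print(match_voice_to_faces(voice, faces))  # -> alice
```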

Founded in 2021, Corsound is a subsidiary of Cortica, parent company of facial recognition and analysis developer Corsight AI.

Startup is the Dr. Manhattan of biometrics, seeing and knowing all

If hyperrealistic video deepfakes back us into a corner, Israel-based startup Revealense has a potential solution. A piece in NoCamels quotes Revealense CPO Amit Cohen, who says the firm “analyzes human behavior to help other people make the best decisions about humans.”

The company claims its proprietary technology monitors subtle, unconscious behaviors that are “hard to control when interacting with other people,” such as facial movements, voice pitch and heart rate – microbiometrics, if you will, that serve as indicators of “cognitive and emotional stress.” Per its website, the technology “operates on non-intrusive video streams and is designed to fit into a wide range of applications that demand verification of human authenticity, data integrity, and exceptional precision.”

“Our advanced deep learning neural network evaluates human factors by engaging with the human nervous system about its surroundings, allowing us to accurately assess the mental state and verify information with high precision,” the firm says.
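
Revealense does not disclose how it derives signals like heart rate from ordinary video, but remote photoplethysmography (rPPG) is the standard published technique: the pulse shows up as tiny periodic color changes in facial skin, so the dominant frequency of the mean green-channel signal over a face region gives beats per minute. A minimal Python sketch on a synthetic trace (the numbers are illustrative):

```python
import numpy as np

FPS = 30  # assumed video frame rate

def heart_rate_bpm(green_means: np.ndarray) -> float:
    """Estimate pulse from per-frame mean green values of a face region."""
    signal = green_means - green_means.mean()      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible pulse: 42-240 bpm
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Synthetic 10-second trace: a 1.2 Hz (72 bpm) pulse buried in camera noise.
t = np.arange(10 * FPS) / FPS
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) \
    + np.random.default_rng(2).normal(scale=0.5, size=t.size)
print(f"estimated heart rate: {heart_rate_bpm(trace):.0f} bpm")  # ~72
```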

The pitch is to the homeland security and mental health care sectors.

“There’s a lot of similarity between homeland security and mental care,” says Cohen. “You want to catch the bad guys and you want to help the good guys who need help. And in these two worlds, you have to be very, very accurate – there’s no room for mistakes.”

In a sense, Revealense is a perfect foil for the army of invisible fraudsters silently injecting deepfake content into our video and audio interactions – not necessarily because it does what it says, but because it has arrived on the scene like a covert organization from a James Bond film, popping up recently with a cache of classified security clients already in place. Half the company’s team are trained psychologists. The other half, so to speak, is a retired Israel Defense Forces general, “the advisor for the security applications of the platform.”

Cohen promises Revealense is based on “responsible AI” and makes bold claims for the platform’s accuracy. “The bottom line is that the parasympathetic nervous system can never lie, can never be deceptive,” he says. “This is something that is inherent in us all.”
