Behavioral Signals brings novel approach to audio deepfake detection


Deepfakes have advanced beyond the capability of leading software tools using vocal biomarkers to detect them. Fortunately, behavioral biometrics and analysis provide a way to understand sentiment that can be used to identify speech generated or manipulated by AI, Behavioral Signals CEO Rana Gujral tells Biometric Update in an interview.

Behavioral Signals started as a project within SAIL (Signal Analysis and Interpretation Labs) examining concepts like emotion and behavior.

“There are a lot of sentiment AI tools out there, and they’re all trying to understand the sentiment, or how a human’s feeling, but they’re doing it with such rudimentary methods.”

Those rudimentary methods include converting speech to text and then parsing the meaning of the words. When the company was founded the accuracy of those tools was very low, Gujral says, and it “still is for the most part.”

Behavioral Signals took a different approach based “on understanding how something is being said” from signals like tone of voice, intonation and prosody, the rhythm and melody of speech. These signals are fed to behavioral signal processing (BSP) engines, which extract and process them in real time. The company’s researchers then added other aspects of behavior, like engagement, empathy and politeness, to further refine their understanding of speech behavior.
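The kind of real-time prosodic extraction described above can be sketched with a toy pitch tracker. Autocorrelation pitch estimation is a standard signal-processing technique; this is a hypothetical stand-in for Behavioral Signals’ proprietary BSP engines, not their actual method:

```python
import numpy as np

def estimate_pitch(frame: np.ndarray, sr: int,
                   fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency of one audio frame via autocorrelation."""
    frame = frame - frame.mean()
    # Autocorrelation at non-negative lags
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)  # shortest lag worth considering
    lag_max = int(sr / fmin)  # longest lag worth considering
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / lag

# Synthetic 200 Hz tone as a stand-in for a voiced speech frame
sr = 16000
t = np.arange(0, 0.05, 1 / sr)
frame = np.sin(2 * np.pi * 200 * t)
print(round(estimate_pitch(frame, sr)))  # ~200
```

A production engine would run this per frame over a sliding window, alongside energy, speaking-rate and intonation-contour features, rather than on an isolated tone.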

Early applications were found around matching call center agents and clients and optimizing for KPIs. Then the CIA’s investment arm In-Q-Tel came knocking, Gujral recounts.

Four years and multiple investment rounds later, Behavioral Signals has years of experience developing its behavior profiling technology, including in classified work with U.S. agencies.

That work led to a new focus, on a problem alarming both the public and private sector.

“Deepfake detection has been a problem that’s been percolating under the radar for quite some time, and now it’s just busted out in the open and is somewhat out of control,” Gujral says.

The 2025 Deepfake Detection Market Report and Buyers Guide from Biometric Update and Goode Intelligence forecasts that voice deepfake checks will triple, surpassing 4.8 billion by 2027.

Detecting authenticity

Most algorithms for detecting audio deepfakes today depend on the detection of vocal biomarkers, according to Gujral. These subtle, naturally occurring physical characteristics, such as micro-pauses, pitch variation, throat resonance, and slight jitters and irregularities from motion or breathing, give away the presence of a real human. Their absence, traditionally, gives away a deepfake.

But not anymore. The assumption that synthetic speech could not consistently replicate these biomarkers held for a time, but Gujral says state-of-the-art systems today can synthesize speech and then insert biomarkers convincing enough to fool the best-performing leaders in voice liveness and deepfake detection.

“That’s the big problem, and nobody’s talking about it,” he says. “The fact is that the vast majority of these tools do not work.”

Behavioral Signals uses its own foundational AI models, as opposed to software “wrapped over anything else,” and used them to develop the deep tech behind its deepfake detection.

“We realized that there’s got to be a better approach, and we started thinking about this behavioral mapping technique.”

By contrast, Gujral says most of the voice deepfake detection systems on the market were born out of repurposed voice biometrics models. Their ineffectiveness is rarely referred to except in hushed tones, but Gujral argues that it can be seen between the lines of other stories, such as the deepfake of U.S. Secretary of State Marco Rubio that evaded detection for eight months.

Behavioral Signals’ voice tone recognition started out less accurate than the average person’s, but as transformers and recurrent neural networks were added to the model, it surpassed human performance. “Now we’re around mid-nineties, which is better than a human will ever be. And then things become really exciting,” Gujral says.

It does this not by looking for typical human vocal patterns, but by building “a personalized fingerprint of the speaker of interest and how the speaker communicates.” That includes how tone shifts mid-sentence, preferred spacing, pause patterns, phoneme-level intonation habits and more. The behavioral profile can be learned ahead of time or identified in real time, and any departure from it triggers an alert.
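The profile-and-deviation idea can be sketched as a per-speaker baseline of behavioral features with a z-score alert. The features here (mean pitch in Hz, average pause length in seconds, speaking rate in syllables per second) and the threshold are hypothetical illustrations, not Behavioral Signals’ actual feature set or model:

```python
import numpy as np

class SpeakerProfile:
    """Toy behavioral fingerprint: per-feature mean/std learned from
    enrollment utterances. Hypothetical sketch, not the vendor's model."""

    def __init__(self, enrollment: np.ndarray):
        self.mean = enrollment.mean(axis=0)
        self.std = enrollment.std(axis=0) + 1e-8  # avoid divide-by-zero

    def deviation(self, utterance: np.ndarray) -> float:
        # Mean absolute z-score across the behavioral features
        return float(np.abs((utterance - self.mean) / self.std).mean())

    def is_anomalous(self, utterance: np.ndarray, threshold: float = 3.0) -> bool:
        return self.deviation(utterance) > threshold

rng = np.random.default_rng(0)
# 50 enrollment utterances: [mean pitch Hz, avg pause s, syllables/s]
enroll = rng.normal([180.0, 0.4, 4.5], [10.0, 0.05, 0.3], size=(50, 3))
profile = SpeakerProfile(enroll)

genuine = np.array([182.0, 0.42, 4.4])   # close to the learned habits
spoof   = np.array([150.0, 0.10, 6.0])   # plausible speech, wrong habits
print(profile.is_anomalous(genuine), profile.is_anomalous(spoof))
```

The key property, matching Gujral’s description, is that the spoof is flagged not because it sounds inhuman, but because it departs from how this particular speaker communicates.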

“From a model architectural standpoint, this means moving beyond the typical CNN to more dynamic models like transformer encoders, temporal attention networks or even hybrid setups like LSTMs that track longer term dependencies in speech.”
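The temporal attention Gujral mentions can be illustrated with bare-bones scaled dot-product self-attention over a sequence of frame features. This is a sketch of the mechanism, not any production architecture:

```python
import numpy as np

def self_attention(frames: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over frame features. Each output
    frame is a weighted mix of ALL frames, letting a model relate a pause
    early in an utterance to an intonation shift seconds later, the kind of
    long-range dependency a fixed-receptive-field CNN struggles with."""
    d = frames.shape[-1]
    scores = frames @ frames.T / np.sqrt(d)        # (T, T) pairwise affinity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over time
    return weights @ frames                        # (T, d) context-mixed frames

T, d = 6, 8  # 6 frames, 8 features each
frames = np.random.default_rng(1).normal(size=(T, d))
out = self_attention(frames)
print(out.shape)  # (6, 8)
```

A real transformer encoder adds learned query/key/value projections, multiple heads and feed-forward layers around this core; an LSTM instead carries the long-range context in a recurrent state.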

‘Bigger than deepfakes’

The behavioral profiles for pre-trained, speaker-specific audio deepfake detection are an implementation of behavioral biometrics. Speaker-independent deepfake detection works more like how behavioral analytics are applied to fraud detection, the CEO explains.

“We would say when you express certain things in these manners, other things are also expressed in these manners.”

How Behavioral Signals’ software arrives at its conclusions is presented with white-box explainability, as Gujral showed Biometric Update in a demonstration. The portion of the recording that generated the alert was highlighted, and assessment scores for different characteristics of that speech were provided.

The capability is embedded into workflows through an API which provides “full forensics,” including the ability to identify possible co-conspirators who may be on the call to aid the deception by feeding the fraudulent identity specific lines. Alternatively, the software can be deployed on-premises or in an air-gapped environment to meet regulatory requirements.

The software can also provide “real-time speaker diarization,” Gujral says, demographic analysis, emotion recognition and a range of related capabilities.

“More importantly,” he says, “this part is foolproof because there are no known real generators that can create an audio that can fool this.”

He believes competitors’ models, however, cannot detect the best generators. The implications for digital trust are vast. “They’re the leading players out there, they’re selling this technology, and the fact is that it doesn’t work.”

One of the use cases Behavioral Signals’ deepfake detection software is built for is providing a service to a high-value client, such as a celebrity or company executive with a reputation to protect. Every video or other piece of content featuring that person can be analyzed for authenticity, enabling takedown requests for fake or manipulated content.

“It has to be really cheap, and effective, and run at scale” to do this job, Gujral says. “Our system does that.”

Behavioral Signals has customers among government agencies and enterprises, including in the Fortune 500. They use the company’s technology for a range of applications, from assessing call-center communication to deepfake detection, and even more capabilities are in development.

“We are focused on understanding authenticity in human interactions,” Gujral says. “And that’s a multi-modal problem, and it’s bigger than deepfakes.”
