Deepfake detection in cochlear implants could reduce fraud risk for hearing impaired

It is increasingly difficult for the human ear to recognize audio deepfakes. For the hearing-impaired who rely on cochlear implants (CI), it’s even harder – unless the implants have built-in deepfake detection.
“Deaf and hard-of-hearing populations, especially cochlear implant users, perceive audio in very different ways from hearing persons,” says Kevin Butler, a professor at the University of Florida’s Department of Computer and Information Science and Engineering (CISE), and director of the Florida Institute for Cybersecurity Research (FICS).
CIs assist hearing by converting sound into electrical signals that stimulate the auditory nerve. They prioritize speech-relevant frequencies and compress the sound’s dynamic range, which strips out nuances such as pitch – meaning their users could be more vulnerable to deepfake voice fraud.
The issue prompted a CISE student, Magdalena Pasternak, to research whether cochlear implants present an increased risk of AI deepfake fraud for their users. The short answer, recently published in the paper “Characterizing the Impact of Audio Deepfakes in the Presence of Cochlear Implant Simulated Audio,” is yes.
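The “CI simulated audio” in the paper’s title refers to a standard research technique: approximating what an implant user perceives with a channel (noise) vocoder, which keeps each frequency band’s slow amplitude envelope but discards the fine structure that carries pitch. The Python sketch below illustrates that general technique; the channel count, band edges, and envelope cutoff are illustrative assumptions, not settings taken from the study.

```python
# Minimal noise-vocoder sketch of CI-simulated audio.
# Parameters (8 channels, 100 Hz - 8 kHz, 160 Hz envelope cutoff) are
# illustrative assumptions, not values from the UF study.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def ci_simulate(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cut=160.0):
    """Approximate CI hearing: band envelopes re-imposed on band-limited noise.

    Assumes fs >= 16 kHz so the top band edge stays below Nyquist.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced, cochlea-like
    rng = np.random.default_rng(0)
    x_in = np.asarray(signal, dtype=float)
    out = np.zeros(len(x_in), dtype=float)
    env_lp = butter(2, env_cut, btype="lowpass", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        # 1) Analysis: isolate this frequency band of the input speech.
        x = sosfiltfilt(band, x_in)
        # 2) Envelope: keep slow amplitude changes, discard the fine
        #    structure (this step is where pitch nuance is lost).
        env = sosfiltfilt(env_lp, np.abs(hilbert(x)))
        # 3) Synthesis: modulate band-limited noise with that envelope.
        noise = sosfiltfilt(band, rng.standard_normal(len(x_in)))
        out += np.clip(env, 0.0, None) * noise
    return out / (np.max(np.abs(out)) + 1e-9)  # normalize to avoid clipping
```

Listening to speech passed through such a simulation makes the loss of pitch cues audible – exactly the kind of detail that can betray a synthetic voice to a typical-hearing listener.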
A release from UF says “the study indicates the need for enhanced deepfake detection systems in implants for the hearing impaired.” Participants without CIs were able to identify deepfakes with 78 percent accuracy, while CI users achieved only 67 percent accuracy. CI users were twice as likely to misclassify deepfakes as real speech.
According to Butler, the CISE faculty lead on Pasternak’s study, the solution could be to build deepfake detection and alert mechanisms directly into assistive devices.
Pasternak researches security in large language models (LLMs), deepfake detection, and machine learning, focusing on how the security community addresses the specific cybersecurity needs of vulnerable populations. “Deepfake technology has advanced so rapidly in such a short span of time that many people are simply unaware of how sophisticated modern deepfakes have become,” she says. “While education and awareness programs can play a huge role in protecting users, we must also develop technological solutions that accommodate diverse auditory processing abilities. By doing so, we can ensure that all users, especially those with auditory implants, have the necessary defenses against ever-evolving threats.”