Deepfake detection loses accuracy somewhere between your brain and your mouth
A team of neuroscientists researching deepfakes at the University of Sydney has found that people seem to be able to identify them at a subconscious level more often than at a conscious one.
A new paper titled ‘Are you for real? Decoding realistic AI-generated faces from neural activity’ describes how brain activity reflects whether a presented image is real or fake more accurately than simply asking the person.
The researchers used electroencephalography (EEG) to measure subjects' neurological responses, and found a consistent neural response associated with faces at approximately 170 milliseconds after a face was presented. Only when the face was real was that response sustained beyond 400 milliseconds.
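To make the decoding idea concrete, the sketch below shows a time-resolved classification analysis in the same spirit: a classifier is trained at each time point of an EEG epoch to separate real-face trials from fake-face trials, and its cross-validated accuracy is tracked over time. The data here are synthetic, and the channel count, sampling rate, and choice of classifier are illustrative assumptions rather than the study's actual pipeline.

```python
# Illustrative sketch only: time-resolved decoding of real vs. fake face trials
# from EEG-like data. Shapes, sampling rate, and the injected signal are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 64, 250    # assumed: 250 samples ~= 1 s at 250 Hz
labels = rng.integers(0, 2, n_trials)           # 0 = fake face shown, 1 = real face shown

# Synthetic EEG epochs; a small label-dependent signal is injected after ~170 ms
# purely so the decoder has something to find.
epochs = rng.normal(size=(n_trials, n_channels, n_times))
onset = int(0.170 * 250)
epochs[labels == 1, :, onset:] += 0.1

# Train an independent classifier at every time point and record its
# cross-validated accuracy; above-chance stretches show when the real/fake
# distinction is decodable from the signal.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()

print(f"Peak decoding accuracy: {accuracy.max():.2f} "
      f"at {accuracy.argmax() / 250 * 1000:.0f} ms after stimulus onset")
```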
The same phenomenon has been observed with real and fake faces as far back as 2017, the researchers note, but had not previously been tested with realistic deepfakes.
Brain activity indicated that the presented image was a deepfake with 54 percent accuracy, but when subjects were asked to say aloud whether the image was fake, accuracy fell to 37 percent.
“Understanding this difference between brain and behavioral responses may be key in determining the ‘real’ in our new reality,” the researchers write.
Associate Professor Thomas Carlson told a Sydney University website that the findings could be used to help train algorithms to detect deepfakes, though the implications for training humans could be just as significant.
Biometrics and computer vision experts have found that highly realistic deepfakes can be detected by algorithms with an encouraging degree of accuracy, but also that some lower-quality deepfakes are likely to evade detection by automated systems.