Deepfake problem studied in EU; Africa not immune

Horizon Europe, the EU's research and innovation funding program, has awarded University of Amsterdam (UvA) Professor Federica Russo a €2.6 million (US$2.81 million) grant to lead a study of the political risks of misinformation and deepfakes.
The Solaris study is scheduled to start in February.
“We will analyze political risks associated with these technologies to prevent negative implications for EU democracies. We want to establish regulatory innovations to detect and mitigate deepfake risks,” according to a UvA marketing brief.
Solaris will also assess value-based generative adversarial networks (GANs) as tools for improving citizen engagement by raising awareness of key global topics such as climate change, the gender dimension and migration.
Deepfakes under scrutiny in Africa
Deepfakes are worrying academics in Africa, too, according to a Forbes article published this week.
Professor Johan Steyn, a research fellow at the School of Data Science and Computational Thinking at Stellenbosch University in South Africa, says deepfakes pose legal and policy issues.
“How do you present evidence to a court of law when you cannot confirm if a video or voice is authentic? There’s almost no way of proving deepfakes are authentic,” Steyn tells Forbes.
In fact, he says, AI will increase the need for philosophers and ethicists.
“If you’re a critical thinker, fake news should be relatively easy to pick up. Deepfakes are more serious. What happens when a bank, for example, accepts voice as a proof of identity?”
Meanwhile, deepfake-connected crime already prowls Africa (and elsewhere), says Vladislav Tushkanov, a lead data scientist with Kaspersky Lab, a cybersecurity firm based in Russia.
Talking to Forbes, Tushkanov says tools exist to spot at least some deepfakes, and observant people can spot rudimentary forgeries. They can watch for jerky movement, lighting shifts from one frame to the next, unnatural blinking and poorly synced lips.
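One of those cues, abrupt lighting shifts between consecutive frames, can be illustrated with a minimal sketch. This is not Kaspersky's method or a real detector; the function name and the brightness threshold are assumptions chosen purely for illustration, and production systems use far more sophisticated signals.

```python
import numpy as np

def lighting_jumps(frames, threshold=10.0):
    """Flag frame indices where mean brightness shifts abruptly.

    frames: iterable of 2-D grayscale arrays (one per video frame).
    threshold: assumed cutoff for a 'suspicious' brightness jump;
    purely illustrative, not a calibrated detection value.
    """
    means = np.array([float(np.mean(f)) for f in frames])
    jumps = np.abs(np.diff(means))  # frame-to-frame brightness change
    return [i + 1 for i, j in enumerate(jumps) if j > threshold]

# Synthetic example: steady frames, then one sudden brightness change.
steady = [np.full((4, 4), 100, dtype=np.uint8)] * 3
bright = [np.full((4, 4), 160, dtype=np.uint8)] * 2
print(lighting_jumps(steady + bright))  # frame 3 is flagged: [3]
```

A real pipeline would operate on the face region only and combine this cue with the others Tushkanov mentions, such as blink patterns and lip sync.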
Experts explain ways they spot video, audio deepfakes
A podcast by The Economist this week picked up the detection thread, too.
On it, University of Florida professor Patrick Traynor talked about a novel method to expose audio generated by artificial intelligence.
Also during the show, Intel’s senior research scientist Ilke Demir explained how to spot visual fakery by analyzing facial color changes. Wendy Betts of eyeWitness to Atrocities, a part of the International Bar Association, discussed how the organization fends off AI-adulteration of its digital evidence.