Some world leaders take the hint and start building bulwarks against deepfake danger
The European Union, Reddit and Google are taking steps to do something about deepfakes. Exactly what they will accomplish is not clear.
The EU wants platform companies including Facebook and Twitter “to take measures to counter deepfakes” on their turfs, according to exclusive reporting by Reuters.
In fact, officials reportedly are converting a voluntary 2018 code of practice for combating disinformation into a co-regulation scheme that holds all signatories accountable. The code also now explicitly names deepfakes as a form of disinformation.
Signatories to the code include Google, Twitter, Facebook, Mozilla and advertising firms.
They will “adopt, reinforce and implement clear policies regarding impermissible manipulative behaviors and practices on their services, based on the latest evidence on conducts and tactics, techniques and procedures” behind malicious deepfake acts, according to Reuters.
The EU would use its Digital Services Act to fine companies up to six percent of global turnover if they do not fulfill their obligations. They will have six months to get in gear.
As it happens, Google last month told DeepFaceLab users that they could no longer use its Colab coding service to train deepfakes, according to trade publisher Unite AI. The publication describes DeepFaceLab as “notorious.”
Colab was designed more as an educational tool for students, allowing them to run very large code projects on fast, high-bandwidth hardware without charge. The implication is that people were using the heavy-duty systems to create high-resolution deepfakes.
Reddit, once an outlaw posting service, has banned the r/deepfakesfw community. The “sfw” suffix means safe for work.
Four years ago, Reddit shut down another deepfake forum, but that one was devoted to making and viewing forged video porn, according to Unite AI in another article.
The publication notes that Reddit has not banned r/DeepFakesSFW (double “s”) or r/SFWdeepfakes, which has many times the readership of the other two forums mentioned here.
It seems that what tripped Reddit’s wire was that people in r/deepfakesfw were using the forum to request custom deepfake porn, which the platform bans.
Then there is computer vision company Paravision, which specializes in facial and activity recognition. This month, it posted a marketing essay on why the company is working to tackle deepfakes.
The post says that Paravision has partnered with “a Five Eyes government agency” to write deepfake-detection algorithms. The Five Eyes, a decades-old intelligence alliance, are Australia, Canada, New Zealand, the United Kingdom and the United States.
Members of the Five Eyes are already sharing biometric data to boost national security.
The company last week said it had received funding from an unnamed Five Eyes partner to develop software to detect deepfake videos. Neither the size of the deal nor the name of the organization was disclosed.
All of the developments are good news for anyone worried about how much chaos could be unleashed by deepfakes. But, for the most part, the aims and methods are vague.
Even the likely punishments that the EU reportedly is ready to levy on its disinformation partners are fuzzy. Could Google get blindsided by a deepfake coder? Sure; it is no more immune to that than to any other kind of cyber abuse.
Does that mean Google or its partners get fined because someone was cleverer than their employees? Must all code developed for this mission be open source?
Reddit shut down another deepfake community, but apparently for very specific reasons: the forum had become a marketplace for porn.
And Paravision almost certainly will end up working on Chinese (OK, and North Korean) forgeries and counter-forgeries so intensively that much of its staff will soon be fluent in Mandarin.
Taken together, these developments feel like a net through which very dangerous content will swim.