Deepfakes get more attention, but code keeps getting more dangerous
IT has a habit of throwing parties that the world either stampedes or ignores for as long as possible. People rushed to smartphones, but almost anything has been enough to distract them from news about deepfakes.
Now, a second very high-profile case of political deepfakes, this one in Venezuela, has come to light, and a modicum of attention is finally being paid to the algorithms.
It might help reverse several years of dismissal by people in leadership positions around the world who have treated the algorithms as parlor tricks and malicious pranks that tomorrow’s technology will make obsolete.
U.S. national broadcaster ABC News is reporting that lawmakers in the state of Washington are preparing to give political candidates recourse in civil court if they are victimized by any form of deepfake.
It is unusual for a U.S. national broadcaster to report on a bill passing the upper house of a distant, largely rural state.
State senators have passed to the lower house a bill designed to provide a bare minimum of protection against false voice, face or other biometric content designed to derail a campaign. Of course, by the time someone has prevailed in court over a synthetic-media attack, the politician might be hiding from enraged, armed political partisans.
According to ABC, the proposal was not decisively voted out of committee. An early version of the Senate bill failed before ultimately passing, 35-15. A similar bill is moving through the House of Representatives; a legislative analysis of it is here.
About a week before that report came a breathless deepfakes article from Vanity Fair, an esteemed and storied U.S. magazine of and for a socialite class that prides itself on how long it can snigger at new electronics and IT before buying in.
Vanity Fair’s headline: “This will be dangerous in elections.” The piece says “political media’s” next hurdle will be dealing with deepfakes. Of course, that is a true assessment, but it is narrow.
The electorate will almost certainly see a political deepfake before a pundit can bloviate about it. Voice deepfakes already have defrauded businesses and consumers.
Vanity Fair’s editorial style mandates printing the names of famous and powerful people in bold, and seeing deepfakes covered with that same gravity is both unsettling and encouraging. It is another shoe dropping for people who have watched deepfakes evolve into a threat, but it can also be reassuring to learn that sometimes the sky really is in danger of falling.
Then, not long after ABC’s and Vanity Fair’s deadlines, came word that a pair of political videos had been posted in Venezuela. One praised the nation’s president, Nicolás Maduro; the second accused Maduro’s opponents in government of mismanaging $152 million.
Neither was true. Both were read by the deepfake avatars of actors with synthetic U.S. accents, according to The Irish Times.
YouTube officials suspended five accounts, but that step, like the prospect of victims suing over malicious deepfakes created with the Synthesia AI video platform, was largely moot.