
Deepfakes get more attention, but code keeps getting more dangerous

IT has a habit of throwing parties that the world stampedes or ignores for as long as possible.

People stampeded for smartphones, but almost anything has been able to distract the public from news about deepfakes.

Now a second very high-profile case of political deepfakes, this one in Venezuela, has been discovered, and a modicum of attention is being paid to the algorithms.

It might help reverse several years of dismissal by people in leadership positions around the world, who have written off the algorithms as parlor tricks and malicious pranks that tomorrow’s technology will make obsolete.

U.S. national broadcaster ABC News is reporting that lawmakers in the state of Washington are preparing to give political candidates recourse in civil court if they are victimized by deepfakes of any kind.

It is unusual for a U.S. national broadcaster to report on a bill passing the upper house of a distant, largely rural state.

State senators have passed to the lower house a bill designed to provide a bare minimum of protection from false voice, face or other biometric content designed to derail a campaign. Of course, by the time someone has prevailed in court after a synthetic-media attack, the politician might be hiding from enraged, armed political partisans.

According to ABC, the proposal was not decisively voted out of committee. An early version of the Senate bill failed before ultimately passing out of committee 35-15. A similar bill is moving through the House of Representatives. A legislative analysis of it is here.

About a week prior to that report came a breathless deepfakes article from Vanity Fair, an esteemed and storied U.S. magazine of and for a socialite class who pride themselves on how long they can snigger at new electronics and IT before buying into them.

Vanity Fair’s headline: “This will be dangerous in elections.” The piece says “political media’s” next hurdle will be dealing with deepfakes. Of course, that is a true assessment, but it is narrow.

The electorate will almost certainly see a political deepfake before a pundit can bloviate about it. Voice deepfakes already have defrauded businesses and consumers.

Vanity Fair’s editorial style mandates printing the names of famous and powerful people in bold, and seeing deepfakes covered there is both unsettling and encouraging. For people who have watched deepfakes evolve into a threat, it is another shoe dropping. But it can also be reassuring to see confirmation that, sometimes, the sky really is in danger of falling.

Then, not long after ABC’s and Vanity Fair’s reports, came word that a pair of political videos had been posted in Venezuela. One was laudatory of the nation’s president, Nicolás Maduro; the second accused Maduro’s opponents in government of mismanaging $152 million.

Neither was true. Both were read by the deepfake avatars of actors with synthetic U.S. accents, according to The Irish Times.

Company officials suspended five YouTube accounts, but that step, like the prospect of people wronged by malicious deepfakes made with the Synthesia AI video platform suing their attackers, was largely moot.
