Concern over deepfakes, social media attitudes grows ahead of US elections

As the 2020 US presidential election inches closer, new efforts are underway to fight what’s expected to be an increase in deepfakes for political and other purposes, and the impact they could have on the voting public.

Indeed, studies show not only a lack of concern on the part of the media-consuming public, but also a misplaced belief by many that they can actually distinguish between reality and a deepfake video, sound bite, or AI-generated fake news story. Some of this misplaced trust could be driven by the apparent broad acceptance of facial recognition technology as used by law enforcement and in other security applications, authorities suggested.

One senior intelligence official explained to Biometric Update on background that “there’s some obvious concern we have, behaviorally speaking, that acceptance of these capabilities – facial recognition, we’re talking about – could translate into an individual not really understanding the difference between it and an altered or generated [deepfake] video or image of someone. I can see where there could be confusion.”

He and other authorities explained that acceptance of facial recognition by some — and a lack of understanding of it among others — could blur the distinction between that technology and the very different technology behind deepfakes, leading to a kind of malaise toward the latter.

“This kind of laissez-faire attitude could seep into complacency toward deepfakes – seeing them all as one technology,” he posited.

He pointed to a new Pew Research Center survey that found 56 percent of surveyed Americans trust law enforcement agencies to use these technologies responsibly, while 59 percent said it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces.

However, 86 percent said they’d “heard at least something about facial recognition technology,” while “just 13 percent [had] not heard anything about facial recognition.”

“Overall awareness” fell to 79 percent among those with a high school diploma or less.

Compare this to the overwhelming majority of social media users – 97 percent – who said in a new survey by The Manifest that they believe they can easily spot fake news. Authorities have expressed concern, though, about whether they can in fact recognize artificially created content, especially “as technology creates increasingly sophisticated deceptive content.”

The survey found that more than half of respondents had seen fake news on Facebook (70 percent) and Twitter (54 percent) in August, while many also saw fake news on YouTube (47 percent), Reddit (43 percent), and Instagram (40 percent).

The 2018 Edelman Trust Barometer, however, found that 63 percent of those surveyed said they have a difficult time distinguishing between real news and fake news.

Perhaps more disturbingly, though, The Manifest reported, “fake news doesn’t deter people from using social networks, even if they see it regularly. More than half of Facebook users (53 percent) say fake news doesn’t impact their use of the platform, and only 1 percent said they would delete Facebook because of fake news.”

A June 2016 study by Columbia University and the French National Institute found that 59 percent of so-called news shared on social media is shared without actually being read.

The study was performed after the satirical website Science Post published an article with the headline “Study: 70% of Facebook Users Only Read the Headline of Science Stories Before Commenting,” which was shared 46,000 times without being read.

“People are more willing to share an article than read it. This is typical of modern information consumption,” said the study’s co-author, Arnaud Legout, in a statement. “People form an opinion based on a summary, or a summary of summaries, without making the effort to go deeper.”

The study abstract stated that “properties of clicks impact multiple aspects of information diffusion, all previously unknown,” and that “secondary resources, that are not promoted through headlines and are responsible for the long tail of content popularity, generate more clicks both in absolute and relative terms.”

“Fake news is more of an annoyance to social media users than a deterrent,” the study concluded. “At the end of the day, people really don’t truly care because if they did, they would need to make a change,” said Johnathan Dane, founder and CEO of KlientBoost. “The habits have already been formed.”

It’s this combination of acceptance, inability to tell real from fake, and seeming indifference that worries experts.

“If they believe they can detect fake news – which in all actuality could just as easily be deepfakes – then how do they know the difference? This is concerning,” a government intelligence official told Biometric Update on background.

The official pointed to how Rudolph Giuliani, an attorney for President Donald Trump and former mayor of New York City, was apparently duped by a deepfake video of House Speaker Nancy Pelosi in late May 2019. The video of Pelosi giving a speech during a public appearance had been doctored to make it seem she’d drunkenly slurred her words. Giuliani tweeted the video, asking, “What’s wrong with Nancy Pelosi?” He later deleted the tweet.

“The ‘drunken’ speech video alone received more than three million views in a matter of days,” said the new report, Disinformation and the 2020 Election: How the Social Media Industry Should Prepare, noting that “deepfakes that are more difficult to refute could enjoy even wider circulation.”

The study, authored by Paul M. Barrett, deputy director of New York University’s (NYU) Stern Center for Business and Human Rights, warned that “the Pelosi episode foreshadowed one type of disinformation that is likely to disrupt the 2020 election: deliberately distorted video, amplified via social media.”

While the report described the video as a “cheapfake,” it emphasized just how much damage can be accomplished using rudimentary technology.

According to the study, Arming the Public With Artificial Intelligence to Counter Social Bots, published in February and funded by the Defense Advanced Research Projects Agency (DARPA), the National Institutes of Health, and the Air Force Office of Scientific Research, “despite high awareness … many people are not confident in their ability to identify [deceptive] social bots … automated or semi-automated accounts designed to impersonate humans [that] have been successfully exploited … to manipulate online conversations and opinions.”

Clearly, there are reasons to be concerned about all of this, experts warn. And it’s these sorts of concerns that are driving the accelerated initiatives to wrestle with the burgeoning threat of deepfakes, according to a variety of government intelligence officials who spoke on background.

House Permanent Select Committee on Intelligence Chairman Adam Schiff earlier warned that “a timely, convincing deep fake video of a candidate could hijack a race – and even alter the course of history.”

Facebook just announced it has partnered with Microsoft, the Partnership on AI, and academia to develop a realistic dataset “to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer” as part of Facebook’s Deepfake Detection Challenge.

The Government Accountability Office (GAO), meanwhile, announced the launch of its Center for Strategic Foresight to explore “the management of space policy by government and the private sector, as well as the growing use worldwide of ‘deep fake’ synthetic media to manipulate online and real-world interactions.”

Both efforts come on the heels of DARPA announcing its own initiative to counter the ever-growing threat of “large-scale, automated disinformation attacks” by rapidly developing software that can sift through a test set of news stories, photos, and audio/video clips to identify and nullify deepfakes before they can spread.
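None of these organizations has published the internals of its detection tooling, but the basic approach they describe – scoring sampled video frames for signs of synthetic manipulation and flagging clips that score high – can be sketched in a few lines. The sketch below is purely illustrative and is not Facebook’s or DARPA’s system; the pre-trained ONNX classifier (“model.onnx”) and its 224x224 input are hypothetical stand-ins:

    # Minimal frame-level deepfake scoring sketch (illustrative only).
    # "model.onnx" is a hypothetical pre-trained binary classifier.
    import cv2
    import numpy as np

    def score_video(path, model_path="model.onnx", sample_every=30):
        net = cv2.dnn.readNetFromONNX(model_path)
        cap = cv2.VideoCapture(path)
        scores, frame_idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % sample_every == 0:
                # Normalize the sampled frame to the classifier's input shape.
                blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                             size=(224, 224), swapRB=True)
                net.setInput(blob)
                # Assume the model outputs P(frame was synthetically altered).
                scores.append(float(net.forward().ravel()[0]))
            frame_idx += 1
        cap.release()
        return float(np.mean(scores)) if scores else 0.0

    if __name__ == "__main__":
        score = score_video("clip.mp4")
        print(score, "flag for human review" if score > 0.5 else "no flag")

Even this toy averaging scheme hints at why the problem is hard: each recompression as a clip spreads across platforms erodes exactly the pixel-level artifacts such classifiers depend on.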

The finding about Facebook users’ lethargic attitude toward fake news is important considering another study’s predictions that “digital voter suppression will again be one of the main goals of partisan disinformation,” that “WhatsApp, the Facebook-owned messaging service, may be misused to provide a vector for false content,” and that it and Instagram will be “vehicles of choice” for entities “disseminat[ing] meme-based disinformation.”

Facebook has reported that it removed 1.5 billion fake accounts over a six-month period in 2018 alone.

“With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences,” said Emilio Ferrara, Assistant Research Professor and Associate Director of Applied Data Science in the University of Southern California’s (USC) Department of Computer Science, and Research Team Leader and Principal Investigator in the Machine Intelligence and Data Science group at USC’s Information Sciences Institute.

Ferrara made his comments in a statement accompanying a recent peer-reviewed study, of which he was lead author, that examined bot behavior in the 2016 and 2018 elections. The study found that “bots or fake accounts enabled by artificial intelligence on Twitter have evolved and are now better able to copy human behaviors in order to avoid detection.”

The USC Information Sciences Institute was awarded a $5 million grant under DARPA’s SocialSim Challenges aimed at simulating the spread of information on GitHub, Twitter and Reddit.

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms,” Ferrara said, noting that, “as social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content …”

The study, Evolution of Bot and Human Behavior During Elections, warns of potentially dangerous artificial political realities like deepfakes, which could then be spread by “an increasing number of automated accounts … extensively used to spread messages and manipulate the narratives others are exposed to.” This is especially important because “malicious bot accounts continuously evolve to escape detection.”

During the 2018 US midterm elections, for example, the study said “bots changed the volume and the temporal dynamics of their online activity to better mimic humans and avoid detection. Our findings highlight the mutable nature of bots and illustrate the challenges to forecast their evolution.”
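The study characterizes bot behavior statistically rather than prescribing a detector, but the arms race it describes is easy to make concrete. The sketch below is illustrative only and is not the USC team’s method: it computes two simple temporal features from an account’s post timestamps – the variability of gaps between posts and the entropy of the hour-of-day histogram – the kind of “temporal dynamics” the study says evolved bots now deliberately humanize:

    # Illustrative temporal bot-detection features (not the USC method).
    # Crude bots post at near-constant intervals around the clock; humans
    # post in bursts and mostly during waking hours. Evolved bots, per the
    # study, mimic the human pattern to evade exactly these signals.
    import math
    from datetime import datetime

    def temporal_features(timestamps):
        """Return (gap variability, hour-of-day entropy) for one account."""
        ts = sorted(timestamps)
        if len(ts) < 2:
            return 0.0, 0.0
        gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
        mean_gap = sum(gaps) / len(gaps)
        # Coefficient of variation of inter-post gaps: ~0 for a scheduler.
        var = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
        cv = math.sqrt(var) / mean_gap if mean_gap else 0.0
        # Shannon entropy of posting hours (max = log2(24), about 4.58 bits):
        # near-maximal entropy suggests round-the-clock automation.
        hours = [t.hour for t in ts]
        probs = [hours.count(h) / len(hours) for h in set(hours)]
        entropy = -sum(p * math.log2(p) for p in probs)
        return cv, entropy

    # Toy account posting every 30 minutes, 24 hours a day: zero gap
    # variability plus maximal hour entropy is a machine-like footprint.
    posts = [datetime(2018, 11, 5, h, m) for h in range(24) for m in (0, 30)]
    print(temporal_features(posts))

An account like the toy example scores as machine-like on both features – which is precisely the footprint the study says newer bots have learned to disguise, rendering any single fixed heuristic short-lived.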

The recent NYU report said AI-generated deepfake videos, in particular, pose a serious threat to the 2020 election process.

The report warned that “realistic but fraudulent videos have the potential to undermine political candidates and exacerbate voter cynicism.”

“That’s extraordinary,” Barrett said.

“Imagine an adversary creating and posting deepfakes on social media, and then spreading them using bots,” one of the officials who spoke on background explained.

“While midterm election day in November 2018 did not feature much Russian interference, there is no guarantee that Russia, and possibly other US antagonists, will refrain from digital meddling in the more consequential 2020 contest. What’s more, in terms of sheer volume, domestically generated disinformation now exceeds malign content from foreign sources and will almost certainly be a factor in the next election,” the NYU report stated.

“Disruptive digital impersonations are coming, whether via hostile state actors or individuals. Every campaign should start preparing now,” jointly warned Katherine Charlet, inaugural director of Carnegie’s Technology and International Affairs Program, and Danielle Citron, vice president of the Cyber Civil Rights Initiative and a professor of law at Boston University School of Law. “It is only a matter of time before maliciously manipulated or fabricated content surfaces of a major presidential candidate in 2020.”

“The key is in the timing,” they noted. “Imagine the night before an election; a deepfake is posted showing a candidate making controversial remarks. The deepfake could tip the election and undermine people’s faith in elections. This is not hypothetical.”

They said, “It does not matter that digital fakery can, for the present moment, be detected pretty easily. People have a visceral reaction to video and audio. They believe what their eyes and ears are telling them — even if all signs suggest that the video and audio content is fake. If the video and audio is provocative, then it will surely go viral. Studies show that people are ten times more likely to spread fake news than accurate stories because fakery evokes a stronger emotional reaction. So no matter how unbelievable deepfakes are, the damage will still be real … even if a deepfake appears weeks before an election, it can spread far and wide.”

With support from the VisMedia Project, Nick Diakopoulos, a professor of communication at Northwestern University whose work spans computational journalism, algorithmic accountability, and social computing, recently posited a number of scenarios in which deepfakes could harm individual elections and sow seeds of doubt about elections in general.

In one of the scenarios a foreign actor interferes in an election campaign “in a powerful way,” Diakopoulos pointed out. “The deepfake is not aimed at hurting or supporting a candidate, rather it is superficially aimed at voter suppression — superficial because the number of voters impacted would probably be quite small. Yet, once the public becomes aware of this activity, the broader effect would be to cast doubt on the legitimacy of the election by suggesting voter suppression while making it difficult to understand its extent … in addition to eroding trust in election outcomes.”
