House Intelligence Committee chairman praised Facebook policy on deepfakes
House Permanent Select Committee on Intelligence Chairman Rep. Adam Schiff (D-CA) said Facebook’s announcement this past week of its “new policy which will ban intentionally misleading deepfakes from its platforms is a sensible and responsible step, and I hope that others like YouTube and Twitter will follow suit.”
Schiff cautioned, however, that, “As with any new policy, it will be vital to see how it is implemented, and particularly whether Facebook can effectively detect deepfakes at the speed and scale required to prevent them from going viral,” emphasizing that “the damage done by a convincing deepfake, or a cruder piece of misinformation, is long-lasting, and not undone when the deception is exposed, making speedy takedowns the utmost priority.”
Schiff added he’ll “also be focused on how Facebook deals with other harmful disinformation like so-called ‘cheapfakes,’ which are not covered by this new policy because they are created with less sophisticated techniques but nonetheless purposefully and maliciously distort an existing piece of media.”
Not all lawmakers – or privacy rights advocates and groups – concerned about this problem, though, were as impressed as Schiff with Facebook’s new policy, Enforcing Against Manipulated Media, which was announced by Facebook Vice President for Global Policy Management Monika Bickert only days before she testified last week before the House Committee on Energy and Commerce Subcommittee on Consumer Protection and Commerce at its hearing, “Americans at Risk: Manipulation and Deception in the Digital Age.”
Subcommittee Chairwoman Rep. Jan Schakowsky (D-IL) chastised “Congress [for having] unfortunately taken a laissez faire approach to regulating unfair and deceptive practices online over the past decade,” while “platforms have let them flourish.” The result, she said, is that “big tech failed to respond to the grave threat posed by deep-fakes, as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate, since it would have done nothing to prevent the altered video of Speaker Pelosi that amassed millions of views and prompted no action by the online platform.”
Similarly, Democratic Presidential candidate Joe Biden’s spokesman Bill Russo stated, “Facebook’s announcement is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress. Banning deepfakes should be an incredibly low floor in combating disinformation.”
Schakowsky and other subcommittee members didn’t seem much assuaged by the testimony of Bickert or the other witnesses at the hearing that Facebook’s policy goes far enough.
She declared that, “Underlying all of this is Section 230 of the Communications Decency Act, which provided online platforms like Facebook a legal liability shield for third-party content. Many have argued that this liability shield resulted in online platforms not adequately policing their platforms, including online piracy and extremist content. Thus, here we are, with big tech wholly unprepared to tackle the challenges we face today,” which she described as “a topline concern for this subcommittee.” She added, “We must protect consumers regardless of whether they are online or not. For too long, big tech has argued that ecommerce and digital platforms deserved special treatment and a light regulatory touch.”
In her opening statement, Schakowsky further noted that the Federal Trade Commission “works to protect Americans from many unfair and deceptive practices, but a lack of resources, authority, and even a lack of will has left many American consumers feeling helpless in the digital world. Adding to that feeling of helplessness, new technologies are increasing the scope and scale of the problem. Deepfakes, manipulated video, dark patterns, bots, and other technologies are hurting us in direct and indirect ways.”
“People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform,” Bickert said in announcing Facebook’s policy, but conceded “some of that content is manipulated, often for benign reasons, like making a video sharper or audio more clear. But there are people who engage in media manipulation in order to mislead,” and these “manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality – usually called deepfakes.”
“While these videos are still rare on the Internet,” Bickert said, “they [nevertheless] present a significant challenge for our industry and society as their use increases.”
“As we enter 2020, the problem of disinformation, and how it can spread rapidly on social media, is a central and continuing national security concern, and a real threat to the health of our democracy,” Schiff said, noting that “for more than a year, I’ve been pushing government agencies and tech companies to recognize and take action against the next wave of disinformation that could come in the form of ‘deepfakes’ — AI-generated video, audio, and images that are difficult or impossible to distinguish from the real thing.”
Schiff pointed to experts who testified during an open hearing of the Intelligence Committee last year that “the technology to create deepfakes is advancing rapidly and widely available to state and non-state actors, and has already been used to target private individuals …”
Schiff said in his response to Facebook’s policy that he intends “to continue to work with government agencies and the private sector to advance policies and legislation to make sure we’re ready for the next wave of disinformation online, including by improving detection technologies, something which the recently passed Intelligence Authorization Act facilitates with a new prize competition,” which Biometric Update earlier reported on.
Bickert said Facebook’s “approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts,” underscoring that “collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media,” and, “as a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes.”
“Going forward,” she stated, Facebook “will remove misleading manipulated media” if it meets the specific, detailed criteria she briefly outlined in announcing the social media giant’s new policy.
She described the criteria as applying specifically to content which “has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say, and, it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
However, she called attention to the fact that the new policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” highlighting that, “consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression, and hate speech.”
She further stated that “videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages,” under the new Facebook policy. And, “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed, and reject it if it’s being run as an ad.”
“And, critically,” she stressed, “people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
Bickert said the company believes that “this approach is critical to our strategy, and one we heard specifically from our conversations with experts,” exclaiming that “if we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the Internet or social media ecosystem.” Thus, she expressed, “by leaving them up and labelling them as false, we’re providing people with important information and context.”
“Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts,” she continued, pointing out that, “Just last month, we identified and removed a network using AI-generated photos to conceal their fake accounts,” and Facebook “teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.”
“We are also engaged in the identification of manipulated content, of which deepfakes are the most challenging to detect,” she continued, explaining “that’s why last September we launched the Deepfake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes.”
Meanwhile, in a separate effort, Facebook has “partnered with Reuters, the world’s largest multimedia news provider, to help newsrooms worldwide to identify deepfakes and manipulated media through a free online training course,” Bickert added, noting that “news organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This program aims to support newsrooms trying to do this work.”
She concluded by saying that, “As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.”
“Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” House Speaker Nancy Pelosi’s Deputy Chief of Staff Drew Hammill responded in a tweet.
Facebook was roundly chastised for seeming to be concerned only about deepfake videos, rather than about all the other technology that has been used – as Facebook itself has acknowledged – to manipulate audio and text deliberately meant to deceive viewers and readers.
“Consider the scale. Facebook has more than 2.7 billion users, more than the number of followers of Christianity. YouTube has north of 2 billion users, more than the followers of Islam. Tech platforms arguably have more psychological influence over two billion people’s daily thoughts and actions when considering that millions of people spend hours per day within the social world that tech has created, checking hundreds of times a day,” the subcommittee heard from Center for Humane Technology President and Co-Founder Tristan Harris.
“In several developing countries like the Philippines, Facebook has 100 percent penetration. Philippines journalist Maria Ressa calls it the first ‘Facebook nation.’ But what happens when infrastructure is left completely unprotected, and vast harms emerge as a product of tech companies’ direct operation and profit?”
Declaring that “social organs of society [are] left open for deception,” Harris warned that “these private companies have become the eyes, ears, and mouth by which we each navigate, communicate and make sense of the world. Technology companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility.”
“Technology,” he said, “has become the filter by which we are experiencing and making sense of the real world,” and, “in so doing, technology has directly led to the many failures and problems that we are all seeing: fake news, addiction, polarization, social isolation, declining teen mental health, conspiracy thinking, erosion of trust, breakdown of truth.”
“But, while social media platforms have become our cultural and psychological infrastructure on which society works, commercial technology companies have failed to mitigate deception on their own platforms,” Harris direly warned. “Imagine a nuclear power industry creating the energy grid infrastructure we all rely on, without taking responsibility for nuclear waste, grid failures, or making sufficient investments to protect it from cyber attacks. And then, claiming that we are personally responsible for buying radiation kits to protect ourselves from possible nuclear meltdowns.”
“By taking over more and more of the ‘organs’ needed for society to function, social media has become the de facto psychological infrastructure that has created conditions that incentivize mass deception at industrialized scales,” he said, starkly adding, “Technology companies have covertly ‘tilted’ the playing field of our individual and collective attention, beliefs and behavior to their private commercial benefit,” and that, “naturally, these tools and capabilities tend to favor the sole pursuit of private profit far more easily and productively than any ‘dual purpose’ benefits they may also have at one time — momentarily — and occasionally had for culture or society.”
Hill staffers involved in this issue advised watching for “more aggressive” legislation emanating from “the variety of committees and subcommittees” with authority “to do something.”
Indeed. Energy and Commerce Committee Chairman Frank Pallone, Jr. (D-NJ) said in his opening statement that Congress needs to move “forward to beginning to get answers so that we can start to provide more transparency and tools for consumers to fight misinformation and deceptive practices.”
“While computer scientists are working on technology that can help detect each of these deceptive techniques, we are in a technological arms race. As detection technology improves, so does the deceptive technology. Regulators and platforms trying to combat deception are left playing whack-a-mole,” he acknowledged.
“Unrelenting advances in these technologies and their abuse raise significant questions for all of us,” he concluded, asking, “What is the prevalence of these deceptive techniques,” and, “how are these techniques actually affecting our actions and decisions?”
But, more importantly – from a distinctly legislative and regulatory standpoint – he posited, “What steps are companies and regulators taking to mitigate consumer fraud and misinformation?”