Fool Me Once … You Can’t Get Fooled Again

Despite public pressure, Facebook decided not to remove an altered video of House Speaker Nancy Pelosi that makes her appear drunk at a public event. In the wake of this incident, many have begun to sound the alarm, warning that this may be just a preview of what is to come as video-manipulation technologies develop. But while these people see the Pelosi video episode as sowing the seeds of a massive new disinformation problem for online media, it may, remarkably, contain the seeds of a solution as well.

Today, many see video as an unimpeachable source, depicting the subject as it exists in the real world. Yet during the latter part of the 20th century, many felt the same way about photographs. Some worried that digital photo-editing techniques would spell the end of truth, but society adapted to the technology, in part because poorly edited photos became the subject of public derision and in part because of the body-image controversies surrounding digitally altered models on magazine covers. Video manipulation is simply the next step in this evolution.

In much the same way, the Pelosi video can make society aware of video-editing technology. While users shared the edited video millions of times on Facebook, the media's discrediting of the video drew even more attention. The Washington Post debunked the video with a lengthy investigative piece, which was followed swiftly by articles from various websites. The story eventually rose to the top of both the news and politics subreddits on Reddit. Cable news outlets such as CNN, MSNBC and Fox News all covered the story as well.

As the developments surrounding the Pelosi video suggest, poorly made fakes can spread quickly over the internet. But media coverage of their falsity can also drive further public awareness of video manipulation. As more-sophisticated fake videos are created and shared in the future, this kind of coverage will help alleviate harms.

Of course, this does not mean that policymakers should remain idle. Video-editing technology can and will be used to spread disinformation, since many people may still view videos as an authentic record of events. In other words, while society will ultimately adapt to the technology on its own, some harms will likely occur during the adaptation process.

To alleviate these harms, some suggest amending Section 230 of the Communications Decency Act to target the platforms through which the fake content is shared. Yet this remedy would create significant problems of its own. Many altered videos are not illegal except insofar as they constitute a civil harm against an individual. Therefore, unless a case against a given post can clear the demanding New York Times v. Sullivan standard for defamation of a public figure, the platform would not actually be liable for the content; making it liable would require the amendment to Section 230 to reach speech protected by the First Amendment, which would be unconstitutional. And even if it were not, many platforms would still be flooded with lawsuits, as they would no longer be able to invoke the procedural protections that Section 230 provides.

Furthermore, if platforms wanted to limit the number of cases they must defend under such an amendment, they would have to choose between two undesirable options: removing all potentially objectionable material or refusing to moderate entirely. If platforms removed all content that might be defamatory, less user speech would be allowed, and legitimate content, along with the users who rely on this mode of communication, would suffer. Alternatively, platforms might simply make no effort to moderate content that users post, so that they could not legally be construed to have knowledge of the content and therefore could not be held liable for it. That would lead to more harmful content, not less, being shared online. Neither outcome is appealing.

So what can lawmakers do?

One simple approach would be to minimize the time it takes society to adapt to the technology. For a variety of psychological reasons, people will believe content regardless of how authentic it appears. While lawmakers cannot change this fact, they can fund efforts to educate the public about the capabilities of video-editing technology. For example, BuzzFeed released a video that appeared to depict President Barack Obama giving a speech about deepfakes, a type of advanced video-editing technology, but the video itself was a deepfake, with actor Jordan Peele impersonating the president. This type of public awareness campaign may not prevent all individuals from being fooled, but it can reinforce the idea that not everything depicted in a video is true.

While we do have challenges ahead in confronting video manipulation, we should not rush to change online content laws just because of our fears of the worst-case scenario. Society can adapt to this new technology; lawmakers should focus on facilitating that process.

Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.
