By Jeffrey Westling
October 23, 2018 at 5:00 am ET
When looking for a hard-hitting, fact-checking reporter, the mind doesn’t normally jump to a writer for BuzzFeed. Maybe it should.
Recently, a Facebook user and self-identified “competitive barefoot runner” wrote a post on a local community page complaining about acorns “littered” on the sidewalk. The post quickly went viral as others expressed predictable outrage, and in a rush to generate traffic, many news sources simply ran a story about the seemingly bizarre post and the outrage it generated in the community.
However, BuzzFeed took a deeper look into the claims — by calling the original poster — and broke the exclusive story: The whole thing was a fake.
“Just goes to show. You can’t fool journalists, unless they are from the Washington Post or Esquire or Gizmodo or Fast Company or Uproxx,” the author of the original post explained.
While the BuzzFeed investigation of competitive barefoot running may seem silly on its face, it offers an important clue to how society can deal with the much more serious challenge of technology-enabled disinformation — so-called “fake news.”
The technological fabrication scheme du jour is the “deep fake”: a class of simulated audiovisual materials that uses modern artificial intelligence algorithms to make animated content appear realistic — in other words, AI-generated fake videos of real people. The technology has advanced remarkably quickly of late, and because of the apparent realism of the final product, some worry that deep fakes will undermine elections or threaten national security. In other words, these commentators believe that deep fakes “will threaten to erode the trust necessary for democracy to function effectively” and are calling for legislation to stop their proliferation.
At the core of these critiques is the idea that the new technology represents an unprecedented challenge that requires an unprecedented response. But that is not entirely true.
While AI-generated fake video as a tool may be novel, society has faced similar challenges when handling deceptive media — and new technology used to create such media — in the past. And as this experience shows, society can rise to the challenge of minimizing the potential harms that new technology can present.
In the 1990s, news outlets and commentators made apocalyptic claims about the death of photography as new tools for digital editing, such as Adobe Photoshop, became commercially available. For example, a 1990 Newsweek article argued that “in the future, the Chinese or others with something to hide wouldn’t even worry about photographers” because of the distrust of photographs.
We obviously do not live in that world of distrust today. But why? Many early attempts at digitally altering images left glaring imperfections in the product and embroiled the publications that ran them in controversy, making the public acutely aware of the new technology.
The market in turn responded to the pressures, with both news agencies and photojournalists imposing codes of ethics related to the editing of photographs for publication. While people still may be deceived by a well-done Photoshop image, photography continues to persist as a trusted and respected source of information when the context surrounding the photo supports its veracity.
The deep fakes of 2018 are obviously not the Photoshop of 1990, and the world is different enough that the same chain of events is unlikely to alleviate the deep fakes problem. Yet many of the same lessons we learned from the advent of photo-editing software can be applied to this new technology, and the worries associated with deep fakes will likewise fail to materialize.
That’s where BuzzFeed comes in. New media outlets, along with sites such as PolitiFact and Snopes, already fact-check the statements of politicians and the stories of other news sources. Indeed, BuzzFeed itself made a conscious decision to move into investigative journalism — a strange choice for a site originally specializing in viral listicles — once it realized that there was a demand.
If the problems associated with the new technology are as serious as some worry, a new market for deep fake “fact-checkers” will likely open up. Obviously, a story about barefoot running and acorns littered on the sidewalk is much easier to debunk than a sophisticated, digitally altered video. However, the incentive for a company like BuzzFeed to apply the same journalistic rigor to a viral deep fake video is similar to, or perhaps even greater than, the incentive to expose a fake post on Facebook.
Clearly, simply debunking deep fake videos can’t solve the larger issue of fake news generally. But still, it highlights a key point: The market tends to adapt to new technologies and resolve the challenges many expect a given technology to present.
Furthermore, these solutions often take unexpected forms. Trying to regulate a technology at a specific point in time may foreclose the ability for innovative new services, such as a BuzzFeed fact-checker, to develop and enter the market.
So will BuzzFeed save us from deep fakes? Maybe, maybe not. But if history is any indication, the apocalyptic claims will likely fail to live up to the hype.
Jeffrey Westling is a technology and innovation policy associate at the R Street Institute, a free-market think tank based in Washington, D.C.
Morning Consult welcomes op-ed submissions on policy, politics and business strategy in our coverage areas.