Facebook CEO Mark Zuckerberg faced myriad questions during his extended congressional testimonies about the Cambridge Analytica scandal, yet most of them probed the same issue, over and over, from different angles: the tension between companies’ business motives and individuals’ concerns about protecting their personal privacy. This was regrettable not just because it stoked debate about European-style legislation that could undermine the digital economy, but also because mostly lost in the din was any consideration of the fact that Facebook data serves the broader public interest as a tremendously valuable resource for scientific research.
Facebook launched an official research program in 2009, giving the academic community tightly controlled access to a ballooning set of granular data about social interactions and activity. It quickly became a “holy grail” for social scientists, who have been drawing on it to publish important new findings almost daily. The question now is whether this kind of scientific research will end up being curtailed in the continuing backlash against data-sharing of any sort.
Policymakers should avoid that outcome at all costs because the benefits of using social network data in research can be substantial. Here are just a few examples that hint at the endless array of insights and discoveries at stake: Researchers at Colorado State University used Facebook data to improve traditional methods for tracking human smoke exposure from wildfires. Another study analyzed Facebook data shared voluntarily by new mothers to identify signs that a mother will develop postpartum depression (PPD). The data enabled researchers to predict whether an expectant mother was experiencing social isolation and lower social capital, both of which turned out to be strong predictors of PPD. And in a third study, researchers at the universities of Ottawa, Alberta and Montpellier in France investigated how Facebook activity could reveal warning signs of mental illnesses, such as depression and anxiety; they are now developing an AI tool that can automatically flag these warning signs for health care professionals.
Yet Facebook’s practice of sharing data for scientific research purposes risks being lumped together with commercial and political uses of data, because one academic researcher violated Facebook policies by funneling data on as many as 87 million users to Cambridge Analytica. Even if the outcry over Cambridge Analytica’s misuse of data does not lead to regulations that restrict beneficial data sharing, public anger stoked by a media feeding frenzy risks hampering legitimate and beneficial scientific endeavors.
Earlier this month, Facebook shelved a proposal to share anonymized user data with hospitals to help inform treatment decisions. This appears to have been a pragmatic move by Facebook following sensationalized news coverage that framed the initiative as secretive. But in reality, there is no evidence that the effort, which was only in its initial stages of development, would have put any individual’s privacy at risk, and it was no more secret than any of Facebook’s other academic partnerships.
Irrational panics about this type of research are nothing new, however. When it came to light in 2014 that Facebook had conducted an “emotional contagion” study on how exposure to positive or negative posts could influence users’ moods, academics feared public backlash would cause Facebook to restrict researchers’ access to its data. One Washington University researcher told Mashable that “[Some of] the biggest scientific studies ever run were Facebook studies. Now I’m kind of worried that it’s all going to fall apart because there’s this risk.”
As the public hears more about Facebook’s efforts to facilitate research, you can be sure that little consideration will be given to the scientific value of this data or the privacy safeguards already in place to ensure Facebook users’ anonymity is preserved; any and all data sharing will be cast as nefarious, no matter the purpose. So even if public sentiment does not translate into new legal restrictions on how a company can share data with third parties for research purposes, Facebook may very well be pressured to halt these efforts for public relations purposes.
That would be a travesty for science and the public interest. Facebook and other companies should be encouraged to make their data available to researchers in accordance with appropriate ethical and legal standards, such as safeguarding personally identifiable information, while also making sure researchers live up to those standards. That will require tight oversight on the companies’ part, and it will require research institutions to pay close attention to review board applications from academics seeking to use large data sets.
In the event data is misused, as it was in the Cambridge Analytica case, regulatory sanctions and other disciplinary responses are wholly warranted for the parties at fault. However, to hold up any one violation as justification for preventing Facebook or others from sharing data with third parties for scientific research would be akin to halting all clinical trials because one was managed inappropriately.
Joshua New is a policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology and public policy.