Facebook might downrank the most vile conspiracy theories. But it won’t take them down.

Through Social Science One, researchers can get access to Facebook data. Launched Wednesday, Social Science One is an independent research commission that will give social scientists access to previously private Facebook data. The initiative, announced back in April, is funded by outside organizations, and the research won’t be subject to Facebook’s approval. Robbie Gonzalez reported in Wired:
Starting today, researchers from around the world can apply for funding and data access that Social Science One will approve — not Facebook. If researchers want to search for something in the platform’s data that could make it look bad — or if they actually find something — Facebook won’t be able to pump the brakes.

To track opportunities and find out more:
All information will be posted at SocialScience.One, with notifications on Twitter and Facebook. Researchers can also sign up for our mailing list. Social Science One will release regular RFIs (requests for information) and RFPs (requests for proposals). Formal proposals will all be submitted through the Social Science Research Council (SSRC). Proposals will be accepted on a rolling basis, with reviews scheduled periodically. Detailed codebooks for the available datasets will be posted at SocialScience.One. Over time, we will add new types of datasets, and most existing datasets will grow as more data come in.
WhatsApp wants to enable more research about misinformation, too (but won’t give data). The Facebook-owned WhatsApp will fund research into the spread of misinformation on the platform. “The program will make unrestricted awards of up to $50,000 per research proposal,” and grantees will be invited (travel and lodging paid) to two workshops. Applications close August 12, 2018, at 11:59 pm PT. Unlike with Social Science One, however, “no WhatsApp data will be provided to award recipients.”

Speaking of WhatsApp, this week it launched a feature that indicates when a message has been forwarded; the changes, the Financial Times reports, come after “a spate of lynchings in India that were alleged to have been sparked by false WhatsApp rumors.” WhatsApp is also running fake news warnings in Indian newspapers, including information about the new forwarding feature. India is WhatsApp’s largest market, with 200 million users.

“A bit more effort might go a long way.” New research from Gordon Pennycook and David Rand suggests that susceptibility to fake news is driven less by strong partisanship than by lazy, non-analytical thinking. This is maybe a good thing, if laziness is an easier problem to target (which, is it?). Anyway, the research included 3,446 participants on Mechanical Turk. Pennycook and Rand write:
Individuals who are more willing to think analytically when given a set of reasoning problems (i.e., two versions of the Cognitive Reflection Test) are less likely to erroneously think that fake news is accurate. Crucially, this was not driven by a general skepticism toward news media: More analytic individuals were, if anything, more likely to think that legitimate (“real”) news was accurate. All of the real news stories that we used — unlike the fake ones — were factually accurate and came from mainstream sources. Thus, our evidence indicates that analytic thinking helps to accurately discern the truth in the context of news headlines. More analytic individuals were also better able to discern real from fake news regardless of their political ideology, and of whether the headline was Pro-Democrat, Pro-Republican, or politically neutral; and this relationship was robust to controlling for age, gender, and education.
They found some differences based on political ideology:
The overall capacity to discern real from fake news was lower among those who preferred Donald Trump over Hillary Clinton, relative to those who preferred Hillary Clinton over Donald Trump (the one exception being that in Study 2, those who preferred Trump were better at discerning Republican-consistent items)… The present results indicate that there is, in fact, a political asymmetry when it comes to the capacity to discern the truth in news media. Moreover, the association between conservatism and media truth discernment held independently of CRT performance. This may help explain why Republican-consistent fake news was apparently more common than Democrat-consistent fake news leading up to the 2016 Presidential election (Allcott & Gentzkow, 2017; Guess, Nyhan, & Reifler, 2018) and why the media ecosystem (including open web links, and both Twitter and Facebook sharing) is more polarized on the political right than on the left in the U.S. (Faris et al., 2017). Nonetheless, it remains unclear precisely why Republicans (at least in Mechanical Turk samples) are apparently worse at discerning between fake and real news.
Planning your 2018 travel? Here’s a calendar of digital disinformation–related events!
Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.