How did the way debunking headlines were worded change readers’ responses? What about the use of visuals? Importantly, did having multiple newsrooms — and multiple logos appended to each fact check — actually increase readers’ trust in the information? Or did it feed into the narrative of an out-of-touch media elite, huddling together? (“We did get people who asked, ‘Who’s behind you?’ ‘Who’s funding you?’” [First Draft’s Claire] Wardle said.)

The new report — which was done by independent researchers commissioned by CrossCheck — found overall that “CrossCheck appears to have gained the trust of a large and politically diverse audience…The fact that the project included local outlets appears to have been one of the reasons why the project reached people across the political spectrum. The perceived impartiality of the project was also one of the reasons that it appealed to a wide audience.” The report includes interviews with audience members and with the journalists who worked on the project, and also notes three “future considerations”:
1. Undertaking additional research on effective debunks using images and videos. As the project evolved, changes took place to the original processes. For example, it became clear that including screenshots as the ‘hero’ image on the posts (which then got automatically dragged into social media posts) meant that CrossCheck was perpetuating the original piece of fabricated content. [Agence France-Presse] therefore designed a graphic template which allowed editors to use these alongside any image that referenced the fabricated content…The impact of this needs to be researched in greater detail. In addition, towards the end of the project, CrossCheck editors started making short explainer videos for Facebook. The metrics immediately showed that they were being shared widely but more research needs to be undertaken about the most effective ways of creating video-based debunks and fact checks.

2. Understanding the “tipping point.” Reporting on disinformation requires different considerations, and the threat of giving oxygen to rumors means that newsrooms will need to give additional thought to when and how to report on these types of stories. During CrossCheck, decisions were taken collectively. More analysis needs to be undertaken about where this tipping point sits, and what metrics journalists should be looking at before they decide whether and how to publish a story on a particular rumor or piece of fabricated content.

3. Understanding the importance of cultural and time-bound contexts for collaborative projects. It is very likely that CrossCheck would never have got off the ground if First Draft had had a longer lead time (which would have given senior editors more time to say “no”) or if there hadn’t just been the active conversations about disinformation and its impact on the US presidential election.
While the results of this research have been very positive, attempts to run similar projects around the UK and German elections have been less successful at getting newsrooms to collaborate. It’s important we understand why CrossCheck worked in the French context.

“An EU-level strategy on how to tackle the spreading of fake news.” The European Commission launched “a public consultation on fake news and online disinformation and set up a High-Level Expert Group representing academics, online platforms, news media and civil society organizations.” The public can weigh in here, through February 23, 2018; there’s one questionnaire for citizens and one for “legal entities and journalists reflecting their professional experience of fake news and online disinformation.”

“A leader, considering or warned of a nuclear attack, is unlikely to be checking Twitter notifications while being rushed into a bunker. There is no time for that.” A terrifyingly hilarious (hilariously terrifying?) memo (entitled “Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem“) from think tank The Stanley Foundation looks at “facets of the modern information ecosystem and how they might affect decision-making involving the use of nuclear weapons, based on insights from a multidisciplinary roundtable.” The memo, which somehow manages to avoid ever mentioning Trump by name, acknowledges that “because the impact of social media on international crisis stability is recent, there are few cases from which to draw conclusions.” Instead, you’ll be comforted to know that there are “more questions than answers” — among the questions:
— To what degree does the information ecosystem make it easier for a leader to use bad information, disinformation, or questionable alternative information sources to shape or buttress his or her preferred decision?

— How do leaders factor messages on social media into perceptions of adversary signals? What messages on social media, and in which contexts, might be effective at signaling? How does the proliferation of message channels affect signal consistency?

— How might online belittling and humiliation affect the emotional state of a decision-maker in a crisis?

— How might the information ecosystem change the likelihood that a leader gets caught in a commitment trap or is able to escape one?

— How and to what extent, if any, could an online public opinion firestorm calling for war from a leader’s political base predispose him or her to escalate a crisis or use nuclear weapons first?

— How might a leader instigate such an online firestorm? How could an adversary, or third party, spark such a firestorm through disinformation?

“They are basically buying good PR by paying us.” The third-party factcheckers working with Facebook are frustrated, reports Sam Levin for The Guardian.

“We’re sort of in the dark. We don’t know what is actually happening,” said Alexios Mantzarlis, director of the International Fact-Checking Network at Poynter, which verifies Facebook’s third-party factcheckers. He said he appreciated that there “are a lot of people at Facebook who really care about this” but, he added, “the level of information that is being handed out is entirely insufficient…This is potentially the largest real-life experiment in countering misinformation in history. We could have been having an enormous amount of information and data.”
Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.