How could deepfakes impact the 2020 U.S. elections?

This post is by Nicholas Diakopoulos and Deborah Johnson from Nieman Lab


New technologies used to produce deepfakes are rapidly advancing and becoming more accessible, allowing users to create compelling video and audio clips of individuals doing and saying things they never did or said. Users can, for instance, synthesize a person's voice, swap one person's face onto another's body in a video, or alter a video interviewee's words simply by rewriting a transcript. Recorded audio-visual media is becoming ever more malleable, nearly as easy to edit as text. The technology offers a host of potential benefits in entertainment and education, from multilingual advertising campaigns to museums bringing dead artists back to life. But it can also challenge aural and visual authenticity and enable bad actors to produce disinformation. Deepfakes have the potential to wreak havoc in contexts such as news, where audio and video are treated as a form of evidence that something happened. So-called "cheapfakes," such as the widely circulated clip of House Speaker Nancy Pelosi, have already demonstrated the potential for low-tech manipulated video to find a ready audience. The more advanced technology creates a whole new level of speed, scale, and potential for personalization of such disinformation.

The goal of this article is to stimulate reflection on the ethics and governance of these emerging technologies. Specifically, we focus on their use in the context of the 2020 U.S. election and seek to encourage debate about potential responses by various stakeholders. What should social media platforms, journalists, technology developers, and policymakers do to ensure that the outcomes of democratic processes aren't negatively impacted by deepfakes?

To that end, we have developed a set of scenarios that describe an array of possible uses (arguably, misuses or unethical uses) of deepfake technology in the 2020 elections. These speculative fictions explore how current state-of-the-art technology could be deployed by actors with various motivations to influence election outcomes. The scenarios describe a rich and complex constellation of ways the technology might interact with human behavior. They raise a number of ethical issues and point to dimensions of elections in which norms, policy, regulation, or technical intervention might be needed or helpful to protect the integrity of the 2020 election.

The set of scenarios purposely includes a variety of:

- Actors: candidates or campaign staffers, external entities such as PACs or foreign governments
- Motivations: supporting a candidate, hurting a competitor, undermining the process
- Modalities of media: audio, video, image
- Phases of the campaign: early vs. late
- Channels for distribution: social media, podcasts, chat apps
- Mechanisms for influencing voters: discrediting a candidate's reputation by association, exaggerating a candidate's views, suggesting a candidate engaged in corruption, providing evidence of a candidate's hypocrisy, inciting a campaign's base, intimidating voters, undermining or attacking the election process, and more

From an ethical perspective, all of the situations described in the scenarios are problematic insofar as they involve deception. However, they vary in the actors who produce and/or distribute the deepfake, the kind of damage they attempt to do, and how the deception can be counteracted. One overarching question is how these variations affect the ethics of each situation.

We developed the scenarios with an eye to making them plausible (describing what you might reasonably believe could happen) rather than merely possible. The challenge for you, then, is to consider what might make the plausible less probable. What can be done now, in the way of establishing norms, rules, policies, and so on, to avoid the worst outcomes, or at least make them less likely?

The scenarios and brief reflections on each are below. We'd love to get your feedback as to whether we have achieved our goals.