A Democratic Response to Digital Disinformation: The Role of Civil Society
Alliance for Securing Democracy
Bret Schafer is a social media analyst and communications officer at the German Marshall Fund’s Alliance for Securing Democracy. He has a Master’s in Public Diplomacy from the University of Southern California, and a BS in Communications with a major in radio/television/film from Northwestern University. As an expert in computational propaganda, he has appeared in the New York Times, Business Week, the Wall Street Journal, and the Los Angeles Times, and he has regularly been a guest on NPR and BBC radio. Prior to joining ASD, Bret spent more than ten years in the film industry, including stints as a development assistant at the Cartoon Network, a development producer at Citizen Skull Productions, and a freelance writer at Warner Brothers. He has also worked in Budapest as a radio host, and in Berlin as a semi-professional baseball player in Germany’s Bundesliga. He is the former editor-in-chief of Public Diplomacy Magazine, and his work has been published in the Chronicle of Social Change, LAist.com, and the Cipher Brief, among others. His regional interests are Russia and Central/Eastern Europe, and he previously interned in the Public Affairs Section at the U.S. Embassy in Moscow, Russia.
Numerous factors complicate efforts to combat digital disinformation, not the least of which is the near impossibility of establishing a universal set of standards that could define what is and is not “disinformation.” This taxonomic dilemma is amplified by different cultural and legal standards related to freedom of speech. Protected free speech in the United States, for example, is vastly different from freedom of expression in Germany. Unlike terrorist content or child pornography, both of which plainly and egregiously violate societal norms and, in some cases, federal and international laws, digital disinformation falls into a difficult-to-codify gray zone. Democratic governments and social media platforms are loath to regulate this space, out of a justifiable and perhaps laudable fear of being seen as “arbiters of truth”—a role that is anathema to the free, open exchange of information.
Of course, authoritarian and autocratic regimes face no such misgivings in their efforts to regulate content online. This creates an uneven playing field where the rules and norms that apply to democratic players are simultaneously ignored and exploited by undemocratic ones. Because malign foreign actors often mimic the vitriolic and polarizing messages and themes championed by certain domestic groups, it is enormously challenging to disaggregate protected speech from foreign influence operations. This is especially true in the United States, where the First Amendment provides broad cover for those who seek to hijack and manipulate public discourse. Any effort to coordinate a response to digital disinformation, whether offensive or defensive in nature, must therefore recognize that democratic societies, at least in the short term, are fundamentally more vulnerable to information operations than authoritarian ones.
While it is important to understand this imbalance, it is equally important to resist the urge to undermine freedom of speech or expression in the name of national security. Doing so would not only weaken democracy, it would validate the repressive tactics authoritarians use at home. It is therefore essential that policymakers adhere rigorously to democratic values in order to avoid unintended negative externalities in the search for a “solution” to computational propaganda. This is not just a matter of principle but of strategic necessity: we cannot allow the very real need to protect the credibility of information to erode the values that foreign influence operations seek to destroy. Put simply, we must not become our adversaries to defeat them.
Instead, governments in free societies should work within the parameters of free speech and expression to build resilience and create deterrents. This includes enacting sensible legislation where necessary and creating entities that can identify and respond to emerging digital threats. But unlike autocratic and authoritarian regimes that must rely on top-down solutions, democracies have the benefit of being able to employ a grassroots approach to the problem. Solutions need not come from capitals, nor do they need to be driven by heavy-handed regulation. Tech and social media companies certainly have an outsized role to play, but civil society actors in the United States and Europe can draw upon a wealth of knowledge and expertise to mitigate vulnerabilities and strengthen resolve. Independent and credible fact-checkers are key, but so are technologists, educators, digital forensic analysts, and strategic communications professionals, to name but a few. Digital disinformation is not just a technological or informational problem; it is a whole-of-society problem. Therefore, we must find whole-of-society solutions.
The Role of Civil Society
Civil society can play four primary roles in the fight against computational propaganda. First, it can act as a watchdog, policing social media and exposing disinformation campaigns as they emerge. Second, it can help to inoculate publics against information manipulation by supporting education outreach and media literacy programs. Third, it can apply pressure to tech companies, businesses, and advertisers that wittingly or unwittingly host, support, or incentivize creators of false and misleading content. Finally, civil society can work with governments, the media, and each other to improve the conditions of mistrust and polarization that create fertile breeding grounds for the spread of disinformation. In all cases, these efforts can and should expand beyond domestic borders and include like-minded groups throughout the transatlantic space. European and American democracies are bound together by common values that supersede any legal or cultural differences. Finding a unified voice and drawing upon each other’s experiences and best practices is essential, not only in the fight against digital disinformation, but also in the broader context of rebuffing authoritarian threats to democracy.
Civil Society’s Role in Monitoring, Countering, and Exposing Disinformation
The first line of defense against digital disinformation is to expose and refute efforts to manipulate information. This involves proactive measures to raise awareness of the tactics and techniques used to place and propagate disinformation as well as reactive measures to analyze, verify, and, if necessary, debunk specific narratives. Fact-checkers are often viewed as the tip of this spear, but in reality, they are the rear guard whose work is to clean up the historical record for posterity’s sake. As countless studies have shown, if a false narrative enters the public’s bloodstream, it is nearly impossible to reverse the deleterious effects. Therefore, we need groups out front who can identify structural weaknesses in the online information ecosystem before adversarial actors exploit them. This is a fundamental difference between traditional and computational propaganda. The former involves the manipulation of information and false narratives; the latter involves the manipulation of algorithms that can spread false narratives at an unprecedented scale and speed. The response, therefore, is not just about objective truth, but also about identifying cyber vulnerabilities in the information space.
Governments can and should play a role in these efforts. The United States, NATO, and many European countries have established task forces that monitor and track disinformation campaigns, including NATO’s StratCom Center of Excellence and the U.S. State Department’s Global Engagement Center. These efforts are critical, particularly in instances when widespread disinformation campaigns threaten public health and safety or national security. Often, however, information operations do not reach the threshold of triggering a government response.
Additionally, in the United States, there is no single agency tasked with alerting the public to active or developing disinformation campaigns, whether online or off. The agencies most likely to spot emerging campaigns originating from abroad—the NSA, the CIA, and the State Department—are either ill-equipped to conduct domestic outreach or expressly prohibited from doing so, leaving a gap that, at least at the moment, must be filled by civil society.
Credibility is also key. While certain European governments—most notably, those in Scandinavia and the Baltics—have proven to be adept at exposing and communicating online threats to their societies, many governments suffer from profound credibility gaps. The specter of political motivations will always haunt government efforts to unmask foreign influence operations, casting doubt on the viability of government-driven fact-checking efforts. Although civil society groups are certainly not immune to real or perceived biases, they are in a better position to independently verify information, particularly if they can prove their nonpartisan or bipartisan credentials. The Poynter Institute’s International Fact-Checking Network and the Kyiv Mohyla Journalism School’s StopFake.org are two initiatives that have proven effective at exposing disinformation in general, and, in the case of StopFake, Kremlin-generated disinformation in particular.
Projects like the Alliance for Securing Democracy’s Hamilton 68 dashboard, the Atlantic Council’s Digital Forensic Research Lab, and the tech-savvy volunteer collective Data for Democracy have also exposed the computational tools used to amplify false narratives on social media. By identifying inorganic nodes in social media networks and raising awareness of malicious automation and systemic vulnerabilities, these groups have moved the conversation away from one that focuses exclusively on narrative solutions to one that addresses broader cyber vulnerabilities. Alone, these initiatives are merely a ripple in the proverbial pond, but combined with research from the academic community, including Indiana University’s Center for Complex Networks and Systems Research (cNetS), Harvard University’s Belfer Center, and Columbia University’s Tow Center for Digital Journalism, these efforts have slowly begun to change the strategic paradigm.
Moving forward, improving coordination mechanisms between these efforts will be critical to avoid redundancies and elevate each other’s work. The Atlantic Council’s Disinformation Portal is a good initial step. So, too, was the AGI / HSS Transatlantic Cybersecurity Partnership, which brought together American and German academics, civil society groups, government officials, and business leaders to exchange ideas and best practices.
Building Resilience through Education
Despite the best efforts of fact-checkers and troll hunters, counter-disinformation work can only do so much. The problem is simply too vast and the tools too varied. It is thus essential that civil society work to raise public awareness of the threat, and to advance programs that give citizens the tools to protect themselves. This means that groups engaged in disinformation research must break out of the bubble of capital cities and engage publics at the local level, especially in disaffected communities that are often targeted by malign influence operations.
Media literacy is one solution, but it is not a silver bullet. This is especially true with efforts to reach older generations, who may have the necessary critical thinking skills but lack familiarity with digital concepts like filter bubbles, fake online personae, or malicious automation. Traditional education outreach through schools will obviously miss this portion of the population. Regardless of the limitations, however, there is a clear need for local civil society groups to train educators and students about how to detect information operations online, and how to be responsible and critical consumers of news. This is especially true in the United States, where the fragmented education system makes any state-driven effort nearly impossible to implement. While Europe has more buy-in at the state level and a more centralized approach to education, there is still a need there for NGOs to partner with governments to develop comprehensive media literacy programs.
In addition, media literacy cannot exist in a vacuum. It must be coupled with civics education and efforts to improve civic participation. The foundation of most conspiracy theories is a distrust of government and a sense of removal from the political process. Efforts to explain how democratic governments function and how citizens can be more engaged in the democratic process can undercut many disinformation narratives.
Finally, civil society should work to support local and independent media. This involves not only direct support for journalists working in underserved communities or covering under-covered topics, but also efforts to inform journalists about how to protect themselves from malign foreign influence. This includes pushing for standards in how the journalistic community responds to leaks of hacked information, as well as best practices for verifying that social media accounts are legitimate before using them as sources in a story.
Applying Pressure When and Where Needed
Citizens in democratic countries have the power to demand that elected officials take the threat of digital disinformation seriously. As consumers, they also have the ability to apply pressure to the platforms and services that have facilitated the spread of disinformation. Civil society groups can fight this battle on multiple fronts, from direct engagement with companies to public name-and-shame campaigns. If needed, they also have the power to organize boycotts and to pressure advertisers to end relationships with companies that wittingly or unwittingly facilitate the spread of disinformation.
It is important to recognize that many creators and distributors of computational propaganda have non-ideological motivations. From Macedonian fake news factories to celebrity follower factories and corporate trolls-for-hire, there is an entire online economy devoted to the manipulation of information. While many of these for-profit services are used for relatively benign purposes (for example, the posting of inflated product reviews on Amazon or the artificial amplification of views on YouTube), almost all can be abused in more malicious ways. The significant overlap between profit-driven and ideological manipulators of information thus requires that civil society groups target the entire digital disinformation ecosystem, not just the tentacles that connect directly back to malign state or non-state actors.
In fact, civil society would be wise to focus its efforts on exposing and degrading for-profit disinformation efforts rather than those operated by ideological extremists or hostile state governments. On the surface, this approach may seem counter-intuitive; after all, why go after the small arms dealers rather than the armies? But degrading the profitability of these commercial disinformation ventures would shrink the community of bad actors online, leaving only the “true believers” and those directly financed by authoritarian governments or extremist groups. This not only would make efforts to expose misleading content more manageable, but it would also limit the ancillary tools and services available to those engaged in large-scale information operations.
A critical cog in the for-profit disinformation wheel is online advertising. Often, however, companies are completely unaware that their brands are appearing on questionable sites. Because major brands enlist third-party ad tech companies to place their ads online, the decision to place ads on specific sites is made by an algorithm rather than by an image-conscious brand director. This differs from television, radio, and print ad buys, where advertisers are acutely aware of the content that is associated with their brands. For obvious reasons, reputable companies do not want their brands associated with sites that peddle hyper-partisan or factually questionable content. Drawing attention to instances when ads for reputable companies appear on less-than-reputable sites is an effective tool in applying pressure up the food chain. The potential loss of a significant revenue stream often carries more weight than the threat of legislation.
Addressing the Root Causes
Disinformation is only effective if the target audience is receptive. Influence is not mind control: it is a nudge or a shove, usually in the direction someone is already predisposed to lean. No amount of disinformation can change hardened views, but a well-crafted campaign can push a targeted population—whether on the far left or the far right—to an even more radicalized position. It can inspire people to action, but it can also drive people to inaction. In a democracy, both results are highly problematic.
Civil society must therefore work to address people’s core grievances with democracy and the liberal international order. The wave of populism that has swept across Europe and the United States did not result from digital disinformation; it resulted from very legitimate concerns. Those engaged in the fight against computational propaganda would be wise to keep those concerns in mind, and to avoid rhetoric that risks further alienating certain populations. Ultimately, the best defense against digital disinformation is to address the real-world issues that disinformation seeks to exploit.
Elizabeth Kolbert, “Why Facts Don’t Change Our Minds,” The New Yorker, 27 February 2017.
Renée DiResta and Jonathon Morgan, “Information Operations are a Cybersecurity Problem: Toward a New Strategic Paradigm to Combat Disinformation,” Just Security, 10 July 2018.
Christian Caryl, “If You Want to See Russian Information Warfare at its Worst, Visit these Countries,” The Washington Post, 5 April 2017. Online.
See https://www.poynter.org/channels/fact-checking and https://www.stopfake.org/.
See http://dashboard.securingdemocracy.org/, https://medium.com/dfrlab, and http://datafordemocracy.org/.
See http://cnets.indiana.edu/, http://datafordemocracy.org/, and https://towcenter.org/.
See https://disinfoportal.org/.
See /project/transatlantic-cybersecurity-partnership/.
Jamie Fly, et al., “Policy Blueprint for Countering Authoritarian Interference in Democracies,” Alliance for Securing Democracy, 26 June 2018, pp. 34-35. Online.
Heidi Tworek, “Responsible Reporting in an Age of Irresponsible Information,” Alliance for Securing Democracy, 23 March 2018. Online.
Samanth Subramanian, “Welcome to Veles, Macedonia: Fake News Factory to the World,” Wired, 15 February 2017. Online; Nicholas Confessore, et al., “The Follower Factory,” The New York Times, 27 January 2018. Online; and Elizabeth Dwoskin and Craig Timberg, “How Merchants use Facebook to Flood Amazon with Fake Reviews,” The Washington Post, 23 April 2018. Online.
Bret Schafer and Andrew Weisburd, “Insinuation and Influence: How the Kremlin Targets Americans Online,” Alliance for Securing Democracy, 16 October 2016. Online.