Where the Truth Lies – News in the Era of Social Media and “the Economy of Attention”

How can countries attempt to regulate digital communication and platforms? Tim Stuchtey and Alexander Szanto discuss the German approach in AICGS’ new report “Defending Democracy in the Cybersphere.”

“The more [people] are instructed, the less liable they are to the delusions of enthusiasm and superstition, which, among ignorant nations, frequently occasion the most dreadful disorders.”
– Adam Smith (1776)

When a series of anti-government protests spread across the Middle East in late 2010, a new phenomenon was praised as an instrument to enhance the democratic process and strengthen free and liberal societies, as well as those that aspire to become one. In the words of an activist: “We use Facebook to schedule the protests, Twitter to coordinate, and YouTube to tell the world.” However, while the nature of the pro-democracy insurgencies and their ultimate outcomes varied widely from country to country, they had one characteristic in common: the influence of social media.

More than six years later, this enthusiasm, whose façade had already begun to crumble, was put into perspective and turned into its opposite. In the heated phase of the U.S. presidential election campaign, social media once more became an influential instrument of political and media discourse, but this time from a quite different perspective. Suddenly, there were terms that were new to many people, such as social bots or fake news, even though they partly represented old phenomena in a new guise.

Nevertheless, social media platforms were not the reason for the Arab Spring, but they offered previously unimaginable opportunities to organize and coordinate protests, connect large groups, and react quickly to new developments and changing circumstances. The same applied to the 2016 elections in the United States. It would be too simplistic, and would not reflect the complexity of the subject, to claim that Trump’s electoral victory was based solely on the revolutionary use of digital media and online tools. Still, it subsequently became apparent that his campaign and its strategy would have been unthinkable without digital media. The power (if any) of fake news, to put it in the words of Nathaniel Persily, Professor of Law at Stanford Law School, “is determined by the virality of the lie that it propagates, by the speed with which it is disseminated without timely contradiction, and consequently by how many people receive and believe the falsehood.”[1] Technology is neither good nor bad; it is only a catalyst of an unprecedented scale and will remain within the human sphere of influence for the foreseeable future. Its application depends on intent and is therefore subject to a wide variety of influencing factors and interest groups.

Technology is neither good nor bad; it is only a catalyst of an unprecedented scale and will remain within the human sphere of influence for the foreseeable future.

The questions that we must ask ourselves as a society and in politics are these: Where is the line between merely drawing different conclusions from the same facts and spreading false information? To what extent are we willing to reconcile free access to information and freedom of expression with fake news and propaganda that influence public opinion and radicalize young people online? How much control do we leave to industry to cope with these challenges? Do Facebook, Twitter, and other platforms serve merely as a digital Speakers’ Corner for debating and discussing subjects of public interest, or should we treat them as editorial boards that control the content and dissemination of information?

Although this role is not yet clearly defined, recent developments have forced these companies to act. To counter massive criticism from the public and from politicians, new internal rules were adopted for communication on the platforms. These community policies primarily address the pressing concerns of fake news and hate speech.

The experiences of the U.S. election campaign prompted the German legislature to go one step further. After the implementation and effect of the voluntary guidelines agreed upon with representatives of the social networks fell short of the Federal Ministry of the Interior’s expectations, a new law was introduced that aimed to prosecute fake news and hate speech with greater intensity. An initial draft of the law on improving law enforcement in social networks (NetzDG) noted that, following the experience of the U.S. election campaign, the fight against prosecutable fake news in social networks had gained high priority in the Federal Republic of Germany. The difficulty of legally regulating the “economy of attention” (Aufmerksamkeitsökonomie) is evident in the first paragraph of the Network Enforcement Act, which came into force on January 1, 2018.

The act defines social networks as profit-oriented telemedia service providers that operate online platforms that allow users to exchange, share, or make publicly available any content with other users.[2] However, the act only applies to providers with more than two million registered users in Germany. Smaller platforms are therefore excluded. Those providers who fall under the law are obligated to take down posts and messages that include content such as sedition, insults to religious communities, defamation, criminal offenses in connection with criminal and terrorist organizations, preparations of serious acts of violent subversion, treacherous acts, as well as criminal offenses concerning child pornography and depictions of violence. By placing the obligation to take down posts in the hands of the platform providers, the NetzDG creates a situation in which the providers first create the rules (algorithms) that define what we see on our timelines, while at the same time being in charge of taking down potentially illegal content. Classic German Ordnungspolitik would demand that the two functions be separated and allocated to different institutions.

The act’s first transparency reports were published in July 2018, and the most recent in January 2019, with some surprises. While Facebook reported comparatively few NetzDG complaints in the first six months (1,704 pieces of content in 886 NetzDG complaint procedures, of which 21 percent were deleted), Twitter reported 264,816 complaints, of which 28,645 (10.8 percent) were blocked. YouTube announced in its report that 214,827 complaints were filed, of which approximately 27.1 percent were removed in the first half of 2018. The most recent YouTube figures, from the second half of 2018, present a similar picture: of the 250,987 complaints, about 21.8 percent were removed. The figures are below what was generally expected, but it is hard to say whether they are high or low, as these are the first statistics ever compiled for this purpose.

The Federal Ministry of Justice was quite satisfied with the implementation of the act and reported a rather small number of complaints against platform operators. The Federal Office of Justice, an agency subordinate to the Federal Ministry of Justice, is responsible for prosecuting violations of NetzDG obligations, including network providers’ handling of user complaints about illegal content. The Federal Office of Justice considers the relatively low number of complaints concerning non-deleted content (704 by the end of November 2018) an indication that platform providers take NetzDG complaints seriously and examine them more thoroughly than in the past.

Nevertheless, while some consider the act a positive signal that enables legal action against hate speech and fake news, others regard the NetzDG as a threat to freedom of expression and argue that Germany in particular, as one of the world’s leading liberal countries, has missed an opportunity; they claim the act leaves too much room for maneuvering around legal loopholes. In fact, some autocratic governments have introduced regulation with almost the same wording but with a very different definition of what they consider fake or extremist.

Along this fault line, a fundamental question arises that goes far beyond that of the proponents or opponents of such an act: is our current political system, still based on the Westphalian order of 1648, capable of setting the right framework for the digital society with all its multi-layered, complex, global, and fast-moving characteristics?

The internet has created great opportunities for the instant cross-border exchange of information. This tool is used by those who want to bypass state interference. It is being used to fight oppressive states and has empowered those who fight for free speech, while at the same time being used by those who want to fight the liberal order. Nation-state approaches to global phenomena held little promise of success long before the internet. Nevertheless, there is growing awareness of the profound changes that have taken place in this historically young phase of development, driven by influential events such as the Arab Spring, Brexit, and the 2016 U.S. election campaign. This process is global and is enabled by companies that see themselves as universal networks that know no national borders. These processes of radical transformation will not stop at the established political systems either. In this context, politics plays a reactive role and is often driven by events, leaving political action little time and space to respond.

At a time when nationality is in vogue again, while digitization is tearing down walls, international agreements are very difficult to reach—especially in areas that are considered crucial for national security.

The political system is currently overburdened with digital policy issues for two main reasons. First, as in many other areas, there is a lack of skilled personnel with the technical expertise to understand how the systems work and the influence they exert. Second, technological change is taking place at a pace that political decision-making processes cannot match. Political decision-making requires time and endurance; what is adopted today has often come a long way and may already be obsolete the moment it goes into effect. In view of this tension, it may be tempting to argue that the scope for political action is extremely limited. At a time when nationality is in vogue again, while digitization is tearing down walls, international agreements are very difficult to reach—especially in areas that are considered crucial for national security. Nevertheless, some nations and communities of states have demonstrated that they can take a pioneering role and exert considerable influence, as is the case with the NetzDG.

Algorithms decide what we see and hear; everything depends on judgments made by the guardians of the social media platforms. This power over public discourse has grown tremendously in recent years as people increasingly seek information on social networks and the role of traditional media diminishes.

The mechanisms of these algorithms must become more transparent in the future, and tech giants must be more open about how they exercise this power. Yet social platforms do not generally oppose the technical and institutional separation of application and content. On the contrary, some even welcome government regulation, as their primary interest is economic growth and they do not want to be perceived as platforms for online propaganda or a haven for insurgents.

Technological changes are often associated with fears of the unknown; fear of the loss of power and control of those who do not want to lose their status quo.

Our democracies have repeatedly been shaken and challenged by changes in media technology. Technological changes are often associated with fears of the unknown; fear of the loss of power and control of those who do not want to lose their status quo. All of these developments have influenced political debates and civil society discourse, whether radio and television in the twentieth century or online media and, most recently, social media in the twenty-first. Nevertheless, social media platforms have a distinctly different structure than previous media technologies, with one decisive difference: an individual may distribute content on a large scale with no significant third-party filtering, fact-checking, or editorial judgment. In some cases, content produced by individuals has more impact on the political debate than traditional media outlets. In this environment, in which like-minded people gather in groups to form so-called echo chambers and filter bubbles, fake news can act as an accelerator.

The state needs to use its power to set the direction in addressing the present challenges, but this will only succeed in conjunction with the tech industry. A shared responsibility between policymakers and the tech industry, making use of modern technology, is desirable and can be a promising long-term approach. Neither the companies nor their technical solutions should be left alone with this task. The driving force behind fake news is not merely the ideology of users, but also the economic interests that scale advertising revenues through polarizing news and muddle the debate.

Barriers to entering the media industry have been significantly lowered as technology becomes easier to use and cheaper to maintain, even while growing more complex. This attracts stakeholders who expect a promising business. These financial interests are evident: independent investigations by The Guardian and BuzzFeed identified 100 websites run by teenagers in a small Macedonian town who published fake news that earned them tens of thousands of dollars. Their goal was to maximize short-run profits from attracting clicks, not to build a long-term reputation for quality. This financial lifeline can be cut and has increasingly come into focus.

Technology can help to filter the overwhelming amount of data, as shown by the so-called eGLYPH “Re-Upload Filter” developed by Professor Hany Farid in cooperation with the Counter Extremism Project (CEP), which can report or delete pre-classified extremist content. Machine learning and artificial intelligence will expand the areas of application and automate such processes. Technology can help to identify and block content that was previously removed from the platform, even when it is re-uploaded by a different user account.
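The principle behind such a re-upload filter can be sketched briefly. eGLYPH’s actual algorithm is proprietary robust hashing; the toy “average hash” below is only an illustration of the general idea, with all names and thresholds chosen for this sketch: known extremist content is reduced to a compact fingerprint, and a new upload is flagged when its fingerprint lies within a small Hamming distance of any blocklisted fingerprint, so that minor alterations (brightness shifts, recompression) do not evade the filter.

```python
# Illustrative sketch only -- not eGLYPH's real algorithm.
# A "pixels" value here is an 8x8 grid of grayscale intensities (0-255).

def average_hash(pixels):
    """Reduce an 8x8 grayscale grid to a 64-bit fingerprint:
    each bit records whether a pixel is above the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p >= mean else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_reupload(pixels, blocklist, threshold=5):
    """Flag content whose fingerprint is within `threshold` bits
    of any fingerprint of previously removed content."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in blocklist)

# Usage: a uniformly brightened copy of blocklisted content still matches,
# because the mean shifts along with the pixels.
original = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
blocklist = {average_hash(original)}
altered = [[min(255, p + 3) for p in row] for row in original]
print(is_reupload(altered, blocklist))   # the altered copy is flagged
print(is_reupload([[0] * 8 for _ in range(8)], blocklist))  # unrelated content is not
```

Production systems use far more robust fingerprints (for video as well as images) and tune the distance threshold to balance false positives against evasion, but the match-against-a-blocklist structure is the same.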

Nevertheless, despite the advantages, a purely technological solution is neither appropriate nor desirable. Content-filtering algorithms are not reliable decision-makers and should not act as judges; rather, they should have a supporting function. Numerous recent activities indicate that the tech industry has apparently recognized the signs of the times. On January 17, Facebook made public that it had removed several accounts, pages, and groups that had engaged in coordinated inauthentic behavior on its platform and were allegedly connected to Russian instigators. The investigation also benefited from open-source reporting and from the work of other organizations and individuals that shared their information about these inauthentic activities with Facebook.

Technology is neither the problem nor the solution; it is only the result of human interaction—at least for now.

Ultimately, perhaps the most significant aspect of this debate is education. Digital education and media competence in schools in particular are hot topics that have been the subject of controversial debate for some time. Education should aim at empowering citizens to distinguish between trusted sources with a review process and platforms where anyone can claim anything. Technology is neither the problem nor the solution; it is only the result of human interaction—at least for now.

In closing, fake news is not a new phenomenon, but social media platforms give it a stage whose reach has never been so influential. In this context, only a holistic approach with shared duties and responsibilities among tech companies, policymakers, and citizens can address these challenges. Education in the form of media literacy should become the general leitmotif of an enlightened society. Employers have a special role when it comes to investing in the further education of their staff. Tech companies, for their part, can support educators with applications for the classroom and for self-learning.

We cast some doubt on whether it is wise to place the authority to delete suspicious content in the hands of those who also create the algorithms that decide what we see on our screens. Still, legal initiatives such as the NetzDG—even if there is room for improvement—in conjunction with technical solutions, civil society initiatives, and the self-regulation measures taken by platforms, are a promising attempt to contain rampant illegal content. However, the question remains how the law-making process can adapt to the speed of innovation and change we see in the world of digital communication.


[1] Nathaniel Persily, “The 2016 U.S. Election: Can Democracy Survive the Internet?” Journal of Democracy 28 (2017): 63-76, p. 70.

[2] Cf. § 1 Par. 1 NetzDG.

The views expressed are those of the author(s) alone. They do not necessarily reflect the views of the American Institute for Contemporary German Studies.

Alexander Szanto

Brandenburg Institute for Society and Security (BIGS)

Alexander Szanto is a Cybersecurity Junior Research Fellow at the Brandenburg Institute for Society and Security (BIGS). He contributes primarily to the research project HERMENEUT, which assesses various organizations’ vulnerabilities and their corresponding at-risk assets, focusing on economic issues of cybersecurity.

Alexander studied European Studies at the University of Maastricht and as part of his studies he spent a semester abroad at the Sciences Po in Paris with a focus on International Relations. He subsequently earned a master’s degree in Intelligence and International Security, concentrating in Cybersecurity and Political Developments in the Middle East post-1945 in the War Studies Department of King’s College in London.

Prior to joining BIGS, Alexander Szanto worked in the State Parliament of North Rhine-Westphalia in Düsseldorf, where he provided research and advice on digital politics and domestic security policy.

Tim Stuchtey

Brandenburg Institute for Society and Security

Dr. Tim H. Stuchtey is the executive director of the Brandenburgisches Institut für Gesellschaft und Sicherheit (BIGS), a homeland security think-tank based in Potsdam, Germany. He is also a Non-Resident Fellow at AICGS and has served as Director of the Business & Economics Program. He works on various issues concerning economic policy, the economy of security, the classic German ‘Ordnungspolitik,’ and the economics of higher education.

Dr. Stuchtey studied economics with a major in international trade and international management and graduated in 1995 from the Westfälische Wilhelms-Universität in Münster. In 2001 he earned a Ph.D. from the Technische Universität Berlin in economics, which he obtained for his work in public finance and higher education policy. He worked as an economist for the German Employers Association and as a university administrator both at Technische and Humboldt-Universität Berlin. He was also the managing director for the Humboldt Institution on Transatlantic Issues, a Berlin-based think tank affiliated with Humboldt-Universität.

He has published a number of articles, working papers, and books on the security industry, homeland and cybersecurity issues, higher education governance and finance, and other questions of the so-called ‘Ordnungspolitik.’