Chapter 7

Keeping up with a fast-moving digital environment

Lead authors: Victor Galaz (Stockholm Resilience Centre), Stefan Daume (Stockholm Resilience Centre), Andreas Olsson (Department of Clinical Neurosciences, Karolinska Institutet), Björn Lindström (Department of Clinical Neurosciences, Karolinska Institutet), and Hannah Metzler (Complexity Science Hub Vienna)

The surge in connectivity creates new risks as it allows for the extensive spread and proliferation of misinformation. Artwork: Jirsak via Canva.

AI-supported recommender systems, social bots, and generative AI provide fertile soil for a new generation of climate mis- and disinformation. But digital media can also become a powerful tool for collaboration and innovation for sustainability, if current trends of misinformation are addressed.

Digital technologies, including social media and applications of AI, are rapidly changing the global information landscape. Digital media allows people to connect at speeds and scales that are unprecedented in human history. This expansion offers immense opportunities for collective problem-solving and accelerated innovation for sustainability.

But the surge in connectivity also creates new risks as it allows for the extensive spread and proliferation of misinformation, false news and malicious attempts to manipulate public opinion. Digital platforms and their embedded recommender systems, automation through social bots, and a new generation of generative AI systems are fertile soil for new forms of automated climate mis- and disinformation. Scientists, the public and policy-makers must keep a close eye on these rapidly unfolding developments. The following issues are of central importance to properly analyze and respond to these risks based on best available evidence (Bail, 2022):

First, there is an urgent need to advance new methods and multidisciplinary approaches to better assess the interplay between algorithmic systems (such as recommender systems), the diffusion of online misinformation, and its impacts on opinion formation, behaviour and emotional well-being (Metzler & Garcia, 2023). For example, we need to understand how the current practice of optimizing algorithms to maximize engagement and reach on most social media platforms affects the spread of climate misinformation. A growing number of digital media users, altered social network properties, and algorithmic feedbacks are challenging issues to investigate (Wagner et al., 2021; Bak-Coleman et al., 2021; Narayanan, 2023) and require standardization efforts (van der Linden, 2023). Nonetheless, these issues are key if we want to address the root social and algorithmic mechanisms that amplify digital climate mis- and disinformation.
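The feedback dynamic described above can be illustrated with a toy simulation — a purely hypothetical sketch, not any platform's actual ranking system. Posts that provoke more reactions are ranked higher, gain more exposure, and thereby provoke still more reactions; all names and parameters below (arousal levels, feed size, exploration rate) are illustrative assumptions:

```python
import random

random.seed(1)

# Hypothetical toy model: each post has a fixed "arousal" level, the
# probability that a user reacts to it when it is shown in their feed.
posts = [{"id": i, "arousal": i / 19, "engagement": 0} for i in range(20)]

def build_feed(posts, top_k=5, explore=0.2):
    """Mostly show the most-engaged posts; occasionally explore a random one."""
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    feed = ranked[:top_k]
    if random.random() < explore:
        feed[-1] = random.choice(posts)  # exploration slot
    return feed

# Simulate rounds of users reacting to their engagement-ranked feed.
for _ in range(2000):
    for post in build_feed(posts):
        if random.random() < post["arousal"]:
            post["engagement"] += 1

ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
top_arousal = sum(p["arousal"] for p in ranked[:5]) / 5
avg_arousal = sum(p["arousal"] for p in posts) / len(posts)
print(f"mean arousal, top 5 posts: {top_arousal:.2f}")
print(f"mean arousal, all posts:   {avg_arousal:.2f}")
```

In this toy setup the most attention-grabbing posts end up dominating the feed regardless of their accuracy — a simplified version of the amplification mechanism that engagement-optimized ranking can create.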

Second, climate misinformation does not develop in isolation from other polarized social issues. On the contrary, misinformation is largely a symptom of deeper societal problems, including increasing affective polarization between political groups and decreasing trust in democratic institutions (Osmundsen et al., 2021; Altay, 2022). This is increasingly visible in the overlap of climate misinformation with issues like opposition to renewable energy projects (Winter et al., 2022), conspiracy theories around geoengineering (Debnath et al., 2023), controversies around healthy diets (Garcia et al., 2019), and xenophobia and false claims linking forest fires to Islamic terrorism (Daume et al., 2023) – to mention just a few. This means that the proliferation of mis- and disinformation unfolds not only across platforms (Wilson & Starbird, 2020), but also across social issues and political communities. Scholars need to expand their focus to this more complex reality (Lorenz-Spreen et al., 2023). Policy-makers and developers of digital platforms should also act proactively to respond to these clusters of mis- and disinformation, rather than treat them in isolation.

Third, access to social media APIs, and thus to public data, is a key prerequisite for independent research. Limitations in API access imposed by large social media companies are a serious obstacle to such research (Morstatter & Liu, 2017). That independent researchers and journalists uncovered the Cambridge Analytica scandal at Facebook in 2018 is an example of the importance of API access for academics as a means to hold powerful social media companies accountable (Bruns, 2019). The dramatic recent changes in API access for researchers following the takeover of Twitter are therefore highly problematic. Independent research studying the diffusion of misinformation, or the dynamics of hate speech and polarization, with Twitter as a use case is at risk. Widely used and publicly available tools that help detect automated misinformation activity, such as Botometer, could become unavailable (Politico, 2023). Emergency managers have also warned of threats to public safety during emergencies: as the platform’s verification system erodes, misinformation risks increase because fake users become verified while public crisis management organizations lose their verified account status (Thompson, 2022).
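Bot-detection tools like Botometer combine many account-level features in supervised models. The underlying intuition can be sketched with a naive heuristic — entirely simplified and hypothetical, not Botometer's actual method — that flags accounts posting with machine-like frequency and regularity:

```python
from statistics import mean, pstdev

def bot_score(post_timestamps):
    """Naive heuristic: very frequent, very regular posting looks automated.
    Real detectors use many more features and trained classifiers."""
    if len(post_timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    regularity = 1 / (1 + pstdev(gaps))    # near 1 when gaps are uniform
    frequency = 1 / (1 + mean(gaps) / 60)  # near 1 when posting every minute
    return regularity * frequency

# A human-like account: irregular gaps, hours apart (seconds since first post).
human = [0, 4_000, 11_500, 13_000, 40_000, 47_200]
# An automated account: one post every 60 seconds, like clockwork.
bot = [i * 60 for i in range(50)]

print(f"human-like score: {bot_score(human):.2f}")
print(f"bot-like score:   {bot_score(bot):.2f}")
```

Real systems combine dozens of such signals (profile metadata, content, network position) and are trained on labelled accounts — which is exactly why researcher access to platform data matters: without it, such tools cannot be built or audited.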

Restricted access, in combination with the emergence of new popular digital platforms like TikTok, may very well make it impossible for misinformation research to keep up with rapid technological and social developments. Regulatory efforts are therefore required. The planned implementation of the EU Digital Services Act in 2024 is one example of a legal response that could help secure critical independent social media research in the future (Politico, 2023; European Commission, 2023). Other countries should follow suit. Without secure data access for independent research, society and current ambitions to reform social media will indeed “fly blind” (Bail, 2022).

References

Altay, S. (2022). How effective are interventions against misinformation? PsyArXiv. https://doi.org/10.31234/osf.io/sm3vk

Bail, C. (2022). Social-media reform is flying blind. Nature, 603(7903), 766. https://doi.org/10.1038/d41586-022-00805-0

Bak-Coleman, J. B., Alfano, M., Barfuss, W., et al. (2021). Stewardship of global collective behavior. Proceedings of the National Academy of Sciences, 118(27), e2025764118. https://doi.org/10.1073/pnas.2025764118

Bruns, A. (2019). After the ‘APIcalypse’: Social media platforms and their fight against critical scholarly research. Information, Communication & Society, 22(11), 1544–1566.

Daume, S., Galaz, V., & Bjersér, P. (2023). Automated framing of climate change? The role of social bots in the Twitter climate change discourse during the 2019/2020 Australia bushfires. Social Media + Society, 9(2), 20563051231168370.

Debnath, R., Reiner, D. M., Sovacool, B. K., Müller-Hansen, F., Repke, T., Alvarez, R. M., & Fitzgerald, S. D. (2023). Conspiracy spillovers and geoengineering. iScience, 26(3).

European Commission (2023) Delegated Regulation on data access provided for in the Digital Services Act. Retrieved May 17, 2023 from European Commission website: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13817-Delegated-Regulation-on-data-access-provided-for-in-the-Digital-Services-Act_en

Garcia, D., Galaz, V., & Daume, S. (2019). EATLancet vs yes2meat: the digital backlash to the planetary health diet. The Lancet, 394(10215), 2153-2154.

Lorenz-Spreen, P., Oswald, L., Lewandowsky, S., & Hertwig, R. (2023). A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature human behaviour, 7(1), 74-101.

Metzler, H. & Garcia, D. (2023). Social drivers and algorithmic mechanisms on digital media. PsyArXiv. https://doi.org/10.31234/osf.io/cxa9u

Morstatter, F., & Liu, H. (2017). Discovering, assessing, and mitigating data bias in social media, Online Social Networks and Media 1, 1-13, https://doi.org/10.1016/j.osnem.2017.01.001.

Narayanan, A. (2023). Understanding social media recommendation algorithms. The Knight First Amendment Institute. Retrieved April 19, 2023, from https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms

Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review, 115(3), 999-1015.

Politico (2023) Twitter’s plan to charge researchers for data access puts it in EU crosshairs. Retrieved May 17, 2023, from Politico website: https://www.politico.eu/article/twitter-elon-musk-plan-to-charge-researchers-for-data-access-puts-it-in-eu-crosshairs/

Thompson, A. (2022). Twitter Chaos Endangers Public Safety, Emergency Managers Warn. Retrieved May 5, 2023, from Scientific American website: https://www.scientificamerican.com/article/twitter-chaos-endangers-public-safety-emergency-managers-warn/

van der Linden, S. (2023). We need a gold standard for randomised control trials studying misinformation and vaccine hesitancy on social media. BMJ 381, 1007.

Wagner, C., Strohmaier, M., Olteanu, A., Kıcıman, E., Contractor, N., & Eliassi-Rad, T. (2021). Measuring algorithmically infused societies. Nature, 595(7866), 197–204. https://doi.org/10.1038/s41586-021-03666-1

Wilson, T., & Starbird, K. (2020). Cross-platform disinformation campaigns: lessons learned and next steps. Harvard Kennedy School Misinformation Review, 1(1).

Winter, K., Hornsey, M. J., Pummerer, L., & Sassenberg, K. (2022). Anticipating and defusing the role of conspiracy beliefs in shaping opposition to wind farms. Nature Energy, 1-8.

Published: 2023-05-25
About the authors

Victor Galaz is an associate professor in political science at the Stockholm Resilience Centre at Stockholm University. He is also programme director of the Beijer Institute’s Governance, Technology and Complexity programme. His research addresses, among other topics, societal challenges created by technological change.

He is currently working on the book “Dark Machines” (for Routledge), about the impacts of artificial intelligence, digitalization and automation on the biosphere.

Stefan Daume is a post-doctoral researcher at Stockholm Resilience Centre at Stockholm University. His research explores connections between digital technologies and sustainability, with particular focus on the promises and risks of social media for public engagement with environmental challenges.

Andreas Olsson is a professor at the Psychology Division of the Department of Clinical Neurosciences at Karolinska Institutet. He leads the Emotion Lab at the institute, aiming to understand how emotional information is transmitted, learned and regulated in social situations.

Björn Lindström is a senior research specialist at the Psychology Division of the Department of Clinical Neurosciences at Karolinska Institutet. He is also a member of the Emotion Lab.

Hannah Metzler is a postdoc at the Complexity Science Hub Vienna. Her main research interest lies in the role of emotions and social processes in communication.

In her current research, she applies methods like text analysis and machine learning to capture emotions at the collective level, and investigate their effects on misinformation spreading on social media.
