Improving sustainability science communication

"Talking to AI 2.0" by Yutong Liu & Kingston School of Art via betterimagesofai.org. (CC-BY 4.0).
Communicating climate and sustainability sciences has proven challenging, especially in strongly polarized contexts. Can AI support new ways of science communication, storytelling, and scenario visioning?
Introduction
Our collective ability to act on climate change and sustainability challenges depends on up-to-date information. For example, governments trying to set targets and design conservation policies need to base such decisions on the best available evidence. Local farming communities preparing for a changing climate need to be able to assess future risks to their crops and livestock, receive evidence-based guidance, and collaborate with their members. Engaging with young people on climate issues requires styles of communication that are perceived as both appealing and actionable.
Climate and sustainability-related mis- and disinformation pose serious threats to information integrity all over the world. Such risks may very well become amplified by the increased use of generative artificial intelligence (generative AI) through, for example, personalized false messages, an increasing volume of false news online, or the proliferation of chatbots that distort scientific facts.1 There is also growing evidence that the combination of expanding digital social networks and the infusion of AI in digital media platforms is influencing human emotions at scale (see Theme box 4, Climate Emotions and AI).
Uses of AI can, however, also help support science communication in new ways.2 Generative AI, such as ChatGPT and multimodal large language models (multimodal LLMs) that combine text, images, data, and even sound, offers a novel opportunity to enhance science communication for sustainability. Traditional science communication methods often struggle to reach diverse audiences in a timely and engaging manner. In contrast, AI-powered tools enable personalized, interactive, and multilingual communication that adapts to users’ needs, local contexts, and decision-making scenarios in real time.3 By transforming complex sustainability science into accessible narratives, visualizations, and dialogues, AI can, at best, help empower both citizens and policymakers.
What Makes Science Communication Effective?
Truly effective science communication is not only about scientists sharing facts with non-scientists, but also about making scientific knowledge accessible, understandable, reliable, and actionable. This points to the importance of recognizing the diversity of the audience: what they care about, what they already know, and how they prefer to receive information (see Moser4 for more details).
The effectiveness of science communication varies widely across stakeholders, educational backgrounds, ages, and beliefs. Using clear language, relatable examples, and practical stories helps people engage with complex issues such as climate change and biodiversity loss. Ideally, science communication should also allow scientists to listen and respond to the audience. People are more likely to trust and act on scientific information when they feel respected and included. Unfortunately, scientists are generally not trained to develop such skills, and often lack the organizational incentives to engage in such forms of communication (see Markowitz and Guckian5 for details).
Engaging with Scientific Knowledge Through AI
One of the observed benefits of large language models (LLMs) is their ability to rephrase text in ways that suit the profile of the user. For example, an LLM can explain complex topics in plain language (ELI5: Explain Like I’m Five6), translating academic jargon into more accessible, easy-to-understand explanations.3,7 Small and purpose-built language models can also be developed to support grassroots communication and education, as demonstrated by several African-led initiatives.8
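As a minimal sketch of this kind of rephrasing step, the snippet below asks an LLM to rewrite a technical abstract in plain language, assuming an OpenAI-style chat completions client; the model name, prompts, and example abstract are illustrative placeholders rather than recommendations from the sources cited above.

```python
# Minimal sketch: asking an LLM to rephrase a technical passage in plain language.
# Assumes an OpenAI-style chat completions client; model name and prompts are
# illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rephrase_plainly(passage: str, audience: str = "a curious teenager") -> str:
    """Ask the model to rewrite an academic passage for a non-specialist audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a science communicator. Rewrite the user's text in plain "
                    f"language for {audience}. Keep the facts intact and avoid jargon."
                ),
            },
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content

abstract = (
    "Anthropogenic forcing has increased the frequency of compound drought-heatwave "
    "events, amplifying risks to rain-fed agricultural systems."
)
print(rephrase_plainly(abstract))
```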
A number of studies show the potential of using LLMs to expand educational opportunities in parts of the world where teaching resources are limited, potentially helping bridge educational inequalities,9 although they may also deepen educational inequality and the digital divide unless such risks are mitigated proactively.10,11 Multimodal LLMs (or multimodal AI) can help explain issues in engaging and understandable ways. Google’s NotebookLM, for example, generates a podcast-style audio discussion from uploaded documents. Multimodal AI can act as a helpful “co-communicator,” providing personalized answers, visuals, and summaries, all while adapting to local context and real-time information. It can, at least in principle, be developed to show how climate change might affect a specific town, or to help citizens understand scientific reports through interactive maps and stories, allowing people to grasp (and maybe even enjoy) complex scientific materials.
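A comparable sketch for the multimodal case is shown below: a question about local flood risk is paired with a map image and sent to a vision-capable model, again assuming an OpenAI-style API; the image URL, model name, and prompt are hypothetical, and real outputs would still need the human curation discussed under limitations below.

```python
# Hedged sketch: pairing a local question with a map image for a vision-capable LLM.
# Assumes an OpenAI-style multimodal chat API; the image URL and model name are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

question = (
    "Using the attached flood-risk map, explain in plain language what a wetter "
    "climate could mean for residents of this town, and what they might do about it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.org/town_flood_map.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```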
Table 1. Examples of how LLMs and generative AI can enhance science communication.
LLMs can also be used in multi-agent system architectures that mimic scientific discussions, such as a roundtable or panel, in which several agents with different perspectives communicate iteratively.12 In the context of sustainability and climate change issues, “virtual stakeholders,” each grounded in a specific text corpus (say, IPCC reports or FAO reports), can bring different stakes and perspectives to bear on a question. A moderator agent can then summarize the different viewpoints and identify commonalities and differences in expert opinions.13
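One way such a roundtable could be wired together is sketched below; the ask_llm helper, the stakeholder corpora, the prompts, and the two-round exchange are all assumptions for illustration, not the setup used in the cited work.

```python
# Minimal sketch of a "virtual stakeholder" roundtable with a moderator agent.
# Each stakeholder is grounded in a different (hypothetical) document corpus;
# ask_llm() stands in for any chat-completion call and is an assumption here.
from dataclasses import dataclass

def ask_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a single LLM call (e.g., an OpenAI-style chat completion)."""
    raise NotImplementedError("plug in your preferred LLM client here")

@dataclass
class VirtualStakeholder:
    name: str
    corpus_description: str  # e.g., "IPCC assessment reports" or "FAO reports"

    def respond(self, question: str, discussion_so_far: str) -> str:
        system = (
            f"You are a panelist whose views are grounded strictly in "
            f"{self.corpus_description}. Answer from that perspective and "
            f"flag anything outside your sources as uncertain."
        )
        user = f"Question: {question}\n\nDiscussion so far:\n{discussion_so_far}"
        return ask_llm(system, user)

def roundtable(question: str, panel: list[VirtualStakeholder], rounds: int = 2) -> str:
    transcript = ""
    for _ in range(rounds):  # iterative exchange, as in multi-agent debate setups
        for stakeholder in panel:
            answer = stakeholder.respond(question, transcript)
            transcript += f"\n{stakeholder.name}: {answer}\n"
    moderator_prompt = (
        "Summarize the panel below: list points of agreement, points of "
        "disagreement, and open questions.\n" + transcript
    )
    return ask_llm("You are a neutral moderator.", moderator_prompt)

# Example (hypothetical corpora):
# panel = [VirtualStakeholder("Climate scientist", "IPCC assessment reports"),
#          VirtualStakeholder("Food systems expert", "FAO reports")]
# print(roundtable("How could a 2 °C world affect smallholder farming?", panel))
```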
Table 1 lists a number of idealized applications of LLMs and generative AI across various aspects of science communication.
AI for Storytelling and Futures
Much of the science being published today points to a rapidly changing world and to complex developments that could lead to many potential futures. The fields of futures studies, anticipatory governance and scenario planning, and the applied fields of strategic foresight and speculative design use a variety of methods and tools to make sense of what might be emerging.23–25 More importantly, these fields also connect what might happen in the future to decisions that are being made today.
These more formal methods and fields of study often incorporate aspects of art and literary forms with a specific focus on the future, such as science fiction, which at its heart is concerned with reflecting on the human condition and our relationship with new forms of technology. In recent years, there has been an increasing focus on applying futures methods and developing scenarios specifically in the context of sustainability.26–29
Many of these future scenarios, especially when applying narrative futuring methods or working at the intersection of art and science, can themselves be powerful science communication tools that can bring a number of different types of scientific findings and driving forces together and present them in compelling ways.30 Science has many stories to tell, and scientists may need extra support to find the story in the data.
How Can AI Tools Support Foresight and Futures Work Connected to Sustainability?
There are several applications of AI that can support horizon scanning, foresight, futures, and science-based storytelling work in the context of improving science communication.
AI tools can help with signal detection and horizon scanning in the early phases of finding anchor points for scenarios. For example, in Lübker et al.,31 the authors used a specific type of machine learning (ML) algorithm to scan thousands of scientific papers on ocean governance in the high seas and identify thematic clusters, which then served as the scientific anchor points for the eventual scenarios. As a final step, the scenarios were written up as short science-fiction stories. A number of surprising or unexpected connections emerged, which could be incorporated into the stories and helped the authors find new and interesting angles31 (see also Carvalho32).
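As an illustration of the general idea (not the actual pipeline used by Lübker et al.), the sketch below clusters a handful of toy abstracts into thematic groups with TF-IDF and k-means from scikit-learn; the example texts and the number of clusters are arbitrary.

```python
# Illustrative sketch (not the cited study's pipeline): clustering paper abstracts
# into thematic groups that could serve as scenario anchor points.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "Area-based management tools for biodiversity beyond national jurisdiction.",
    "Deep-sea mining governance and environmental impact assessment gaps.",
    "Equitable benefit sharing of marine genetic resources in the high seas.",
    "Monitoring, control and surveillance of distant-water fishing fleets.",
]  # in practice: thousands of abstracts from a literature search

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

n_clusters = 2  # chosen arbitrarily here; real studies tune this
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

# Print the highest-weighted terms per cluster as candidate "anchor points".
terms = vectorizer.get_feature_names_out()
for cluster_id, centroid in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[i] for i in centroid.argsort()[::-1][:5]]
    print(f"Cluster {cluster_id}: {', '.join(top_terms)}")
```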
As with any application of AI, it is necessary to actively engage with the outputs of the work, test them, and validate the results against the sources. Specifically in the context of science, recent work by Messeri and Crockett33 makes the case that AI solutions applied to science communication can exploit cognitive shortcomings, making us vulnerable to “illusions of understanding” whereby we believe we understand more about the world than we actually do. AI thus has the potential to supercharge the Dunning-Kruger effect, in which a lack of knowledge and skill in a certain area causes people to overestimate their own competence.
Limitations and key challenges
LLMs have a well-documented capacity to quickly summarize, tailor, and communicate climate and environmental information.3,34 However, their potential to hallucinate, that is, to produce inaccurate or misleading information such as false facts or citations, remains a concern.35 LLMs can refer to or cite nonexistent material, such as news and scientific articles; produce factually incorrect information; introduce subtle inaccuracies; and oversimplify responses. As discussed earlier, hallucinations are not only the result of the technical features of LLMs; they also emerge from how humans interact with LLMs through prompting and feedback.36 The absence of industry standards, along with a lack of agreed-upon and regulated best practices, presents severe challenges in mitigating hallucination risks.37 These limitations should be taken seriously and serve as reminders that uses of AI require careful, and time-consuming, curation and oversight.
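One small, partial safeguard against fabricated citations is to check automatically whether DOIs mentioned in an AI-generated text actually resolve, for example against the public Crossref REST API, as in the sketch below. The regular expression and the example answer are illustrative, and a resolving DOI still says nothing about whether the cited paper supports the claim; that check remains a human task.

```python
# Hedged sketch: verify that DOIs cited in an LLM answer resolve via Crossref.
# Catches only one class of hallucination (nonexistent references).
import re
import requests

DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def check_dois(llm_output: str) -> dict[str, bool]:
    """Map each DOI found in the text to whether Crossref knows it."""
    results = {}
    for doi in set(DOI_PATTERN.findall(llm_output)):
        doi = doi.rstrip(".,;)")  # strip trailing punctuation picked up by the regex
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = resp.status_code == 200
    return results

# Deliberately invented DOI for illustration; it should report as NOT found.
answer = "Recent work (doi:10.1038/s41558-024-00000-1) shows a sharp decline in ..."
for doi, exists in check_dois(answer).items():
    print(f"{doi}: {'found in Crossref' if exists else 'NOT found - verify manually'}")
```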
As for any other area discussed in this report, AI cannot be viewed as a panacea to deeper science communication issues—many of which are rooted in broader social issues such as unequal access to education, feelings of exclusion, polarization, identity politics, and more.38–40
Climate Emotions and AI
Author: Victor Galaz
Artificial intelligence (AI) is often viewed as a method to help support rational decision-making across a whole range of sustainability and climate issues. However, various uses of AI also shape our emotions—another key aspect of human decision-making.
Emotions such as fear, joy, anger, and empathy play a crucial role in shaping human perceptions and responses to crises, including climate change. Fear of extreme weather events, anger at political inaction, and hope for a greener future all influence how individuals and societies respond to environmental challenges. AI and associated technologies may shape climate emotions in several ways, thus influencing human behavior at scale. Social media platforms, for example, powered by AI-driven recommender systems, curate content based on emotional engagement, often prioritizing emotionally charged posts—whether hopeful, outraged, or fearful. Generative AI can be used to craft highly persuasive digital content, further influencing public opinion and activism in complex ways.
The individual and social impact of the spread of climate emotions is far from linear and predictable. Instead, it is the result of interactions between technology, social relations, and psychological mechanisms. On the one hand, AI-powered social networks can help mobilize climate action. Online-supported movements like Fridays for Future gained traction through emotionally resonant social media messaging, helping millions of people rally around climate advocacy. On the other hand, digital networks can contribute to climate anxiety, spread misinformation, further disconnect people from Nature, and lead to affective polarization. AI-generated content can reinforce such processes and thus deepen societal divisions on climate issues.
In an era in which AI contributes to shaping how we feel about climate change, the question is not whether technology influences our emotions—but how we understand its role in shaping our emotional connection to the planet, to future generations, and to one another.
This text builds on the article by Galaz, V., et al. (2025).41
Bibliography
- Galaz, V. et al. AI could create a perfect storm of climate misinformation. Preprint at https://doi.org/10.48550/ARXIV.2306.12807 (2023).
- Schäfer, M. S. The Notorious GPT: science communication in the age of artificial intelligence. Sci. Commun. 22 (2023).
- Alvarez, A. et al. Science communication with generative AI. Nat. Hum. Behav. 8, 625–627 (2024).
- Moser, S. C. Communicating climate change: history, challenges, process and future directions. WIREs Clim. Change 1, 31–53 (2010).
- Markowitz, E. M. & Guckian, M. L. Climate change communication. in Psychology and Climate Change 35–63 (Elsevier, 2018). doi:10.1016/B978-0-12-813130-5.00003-5.
- Fan, A. et al. ELI5: Long Form Question Answering. in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (eds Korhonen, A., Traum, D. & Màrquez, L.) 3558–3567 (Association for Computational Linguistics, Florence, Italy, 2019). doi:10.18653/v1/P19-1346.
- Biyela, S. et al. Generative AI and science communication in the physical sciences. Nat. Rev. Phys. 6, 162–165 (2024).
- Curto, G. et al. Africa leading the global effort for AI that works for all. Nat. Afr. (2025). doi:10.1038/d44148-025-00141-1.
- United Nations. Human Development Report 2025. Human Development Reports https://hdr.undp.org/content/human-development-report-2025 (2025).
- Capraro, V. et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 3, pgae191 (2024).
- Kasneci, E. et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103, 102274 (2023).
- Liang, T. et al. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Preprint at https://doi.org/10.48550/arXiv.2305.19118 (2024).
- Ryo, M. Unpublished.
- Ning, L. et al. User-LLM: Efficient LLM Contextualization with User Embeddings. in Companion Proceedings of the ACM on Web Conference 2025 1219–1223 (Association for Computing Machinery, New York, NY, USA, 2025). doi:10.1145/3701716.3715463.
- Viswanathan, S., Mohammed, O., Vezina, C. & Doces, M. Exploring Climate Awareness and Anxiety in Teens: An Expert-Driven AI Perspective. in Climate Change AI (Climate Change AI, 2024).
- Wu, S., Fei, H., Qu, L., Ji, W. & Chua, T.-S. NExT-GPT: Any-to-Any Multimodal LLM. Preprint at https://doi.org/10.48550/arXiv.2309.05519 (2024).
- Luera, R. et al. Personalizing Data Delivery: Investigating User Characteristics and Enhancing LLM Predictions. in Companion Proceedings of the ACM on Web Conference 2025 1167–1171 (Association for Computing Machinery, New York, NY, USA, 2025). doi:10.1145/3701716.3715452.
- Seo, J., Kamath, S. S., Zeidieh, A., Venkatesh, S. & McCurry. MAIDR Meets AI: Exploring Multimodal LLM-Based Data Visualization Interpretation by and with Blind and Low-Vision Users. in Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility 1–31 (Association for Computing Machinery, New York, NY, USA, 2024). doi:10.1145/3663548.3675660.
- Gao, Y. et al. Retrieval-Augmented Generation for Large Language Models: A Survey. Preprint at https://doi.org/10.48550/arXiv.2312.10997 (2024).
- Vaghefi, S. A. et al. ChatClimate: Grounding conversational AI in climate science. Commun. Earth Environ. 4, 480 (2023).
- Koldunov, N. & Jung, T. Local climate services for all, courtesy of large language models. Commun. Earth Environ. 5, 13 (2024).
- Faiaz, Md. A. & Nawar, N. Short Paper: AI-Driven Disaster Warning System: Integrating Predictive Data with LLM for Contextualized Guideline Generation. in Proceedings of the 11th International Conference on Networking, Systems, and Security 247–253 (Association for Computing Machinery, New York, NY, USA, 2025). doi:10.1145/3704522.3704549.
- Harrington, C. N., Klassen, S. & Rankin, Y. A. “All that You Touch, You Change”: Expanding the Canon of Speculative Design Towards Black Futuring. in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 1–10 (Association for Computing Machinery, New York, NY, USA, 2022). doi:10.1145/3491102.3502118.
- Fergnani, A. Corporate Foresight: A New Frontier for Strategy and Management. Acad. Manag. Perspect. 36, 820–844 (2022).
- Transforming the Future: Anticipation in the 21st Century. (Routledge, London, 2018).
- Merrie, A., Keys, P., Metian, M. & Österblom, H. Radical ocean futures-scenario development using science fiction prototyping. Futures 95, 22–32 (2018).
- Cork, S. et al. Exploring Alternative Futures in the Anthropocene. Annu. Rev. Environ. Resour. 48, 25–54 (2023).
- Moore, M.-L. & Milkoreit, M. Imagination and transformations to sustainable and just futures. Elem. Sci. Anthr. 8, 081 (2020).
- Tyszczuk, R. & Smith, J. Culture and climate change scenarios: the role and potential of the arts and humanities in responding to the ‘1.5 degrees target’. Curr. Opin. Environ. Sustain. 31, 56–64 (2018).
- Durán, A. P. et al. Bringing the Nature Futures Framework to life: creating a set of illustrative narratives of nature futures. Sustain. Sci. (2023). doi:10.1007/s11625-023-01316-1.
- Lübker, H. M. et al. Imagining sustainable futures for the high seas by combining the power of computation and narrative. Npj Ocean Sustain. 2, 4 (2023).
- Carvalho, P. How Generative AI Will Transform Strategic Foresight. Insight&Foresight https://www.ifforesight.com/post/how-generative-ai-will-transform-strategic-foresight (2024).
- Messeri, L. & Crockett, M. J. Artificial intelligence and illusions of understanding in scientific research. Nature 627, 49–58 (2024).
- Muccione, V. et al. Integrating artificial intelligence with expert knowledge in global environmental assessments: opportunities, challenges and the way ahead. Reg. Environ. Change 24, 121 (2024).
- Banerjee, S., Agarwal, A. & Singla, S. LLMs Will Always Hallucinate, and We Need to Live With This. Preprint at https://doi.org/10.48550/arXiv.2409.05746 (2024).
- Wachter, S., Mittelstadt, B. & Russell, C. Do large language models have a legal duty to tell the truth? R. Soc. Open Sci. 11, 240197 (2024).
- Bengio, Y. et al. International AI Safety Report. Preprint at https://doi.org/10.48550/arxiv.2501.17805 (2025).
- Moser, S. C. Reflections on climate change communication research and practice in the second decade of the 21st century: what more is there to say? WIREs Clim. Change 7, 345–369 (2016).
- Lamont, M. Seeing Others: How to Redefine Worth in a Divided World. (Penguin Books, S.l., 2024).
- Badullovich, N. From influencing to engagement: a framing model for climate communication in polarised settings. Environ. Polit. 32, 207–226 (2023).
- Galaz, V. et al. Artificial intelligence, digital social networks, and climate emotions. Npj Clim. Action 4, 23 (2025).
About the authors
Masahiro Ryo is professor of Environmental Data Science at the Leibniz Centre for Agricultural Landscape Research where he leads the working group "Artificial Intelligence".
Victor Galaz is an associate professor in political science at Stockholm Resilience Centre at Stockholm University. He is also programme director of the Beijer Institute’s Governance, Technology and Complexity programme.
Andrew Merrie is a research liaison officer at the Stockholm Resilience Centre.

