Collective decisions for a planet under pressure

"Distorted Lava Flow" by Lone Thomasky & Bits&Bäume via betterimagesofai.org. (CC-BY 4.0).
Securing a just and safe future for all requires cooperation and collective decisions at scale. Can AI facilitate concerted and informed decisions for a planet under pressure?
Introduction
Cooperation and collective decisions are critical for achieving a sustainable future for all. Such cooperation is vital for managing both environmental and social commons, such as fisheries and public infrastructure. Cooperation involves a group's ability to agree on and work together toward common goals, even when individual selfishness is tempting. The importance of collective action is most apparent in social dilemmas, where individual incentives conflict with the collective benefit. The study of cooperation spans various disciplines, including biology and the social sciences, and has identified mechanisms such as external authorities (e.g., governments taxing uncooperative behavior) and social reciprocity that promote cooperative behavior. However, several challenges hinder the emergence of cooperation1:
Large and heterogeneous collectives. Cooperation becomes difficult in large groups because effective enforcement mechanisms are lacking and participants are anonymous, making it hard to stabilize reciprocity. Heterogeneous groups, with differing preferences and resources, also pose a challenge for collective action.
Complex human behavior. Understanding the diverse motivations and behaviors of individuals is crucial for fostering cooperation, especially in the context of sustainability and policy changes.
Environmental complexity. The dynamic nature of environments, including feedback loops, thresholds, and uncertainties, complicates the stabilization of cooperative efforts.
Transient dynamics. Research has paid little attention to the transient phases that lead to cooperation, including the timing and stability of cooperative arrangements, which are essential for sustainability transitions.
In what ways could AI help us explore and better understand the dynamics of human collaboration? Below we discuss several related areas, including both the potential and the limitations of AI.
AI for Data Generation and Collection
One key aspect of cooperation and collective decisions is related to the ability of those making decisions—such as state and non-state actors—to act on the best available information. While this might sound straightforward, numerous studies show the difficulties decision-makers face in trying to match collective decisions and institutions to the rate of change in the climate system and ecosystems.2,3
A number of intriguing applications of large language models (LLMs) illustrate the potential to support the synthesis capacities of international institutions,4,5 including the Intergovernmental Panel on Climate Change (IPCC)6. LLMs can also help structure and summarize increasingly information-dense policy areas, such as national climate pledges.5
Scientific syntheses help inform political agendas and public debate, and the use of domain-specific LLMs could thus potentially help speed up the collection and analysis of a growing body of scientific studies and national reports in the climate and sustainability domain.
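To make this concrete, a minimal sketch of such a summarization step is shown below, using the Hugging Face transformers library. The model name is a generic open summarization model and the input file is a hypothetical placeholder; a production pipeline would chunk long reports and could use a domain-adapted climate model instead.

```python
# A rough sketch of LLM-assisted synthesis of a policy document. The model
# is a generic open summarizer; a domain-adapted climate model could be
# swapped in. The input file is a hypothetical placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with open("national_climate_pledge.txt") as f:  # placeholder document
    pledge_text = f.read()

# Summarize the opening section; a real synthesis workflow would chunk the
# full report and aggregate the per-chunk summaries.
result = summarizer(pledge_text[:3000], max_length=120, min_length=40)
print(result[0]["summary_text"])
```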
AI-Assisted Predictive Modeling
There are several ways in which predictive modeling can contribute to collective decision-making:
Mathematical models can help us understand and support cooperation. Process-based, mechanistic models facilitate theory development and simulation when traditional experiments are impractical. Approaches building on complex systems science have enhanced our understanding of how simple agents interact to form larger structures, as is evident in fields like swarm intelligence and evolutionary game theory.7,8 However, individual behaviors in biological, social, and artificial systems are often complex and not easily captured by simplistic models. Here, AI models can help in multiple ways.
Agent-based modeling and artificial life allow for more nuanced insights into individual decision-making and diversity among agents. Multiagent reinforcement learning (MARL) offers a way for agents to learn behaviors on their own in dynamic environments. Recent examples of deep MARL applied to collective decision-making and cooperation include the AI Economist project9 and the AI for Global Climate Cooperation initiative.10 Both projects implement a two-level, inner-loop/outer-loop reinforcement learning setting: the outer-loop deep reinforcement learning agent learns a taxation-and-subsidy policy (AI Economist) or a negotiation protocol (AI for Global Climate Cooperation), while the inner-loop agents continuously adapt to maximize their individual welfare. The learned AI Economist policy outperforms several established taxation-and-subsidy policies in terms of total welfare achieved and inequality avoided.
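To illustrate the two-level structure, the toy sketch below has an outer "planner" learn a flat tax rate through bandit-style value updates while heterogeneous inner agents learn how much effort to exert. All quantities are simplified stand-ins, not the actual AI Economist environment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-level setting in the spirit of inner-loop/outer-loop MARL: an
# outer planner learns a flat tax rate; inner agents learn effort levels.
# All numbers are illustrative, not taken from the original papers.
N = 8
skills = rng.uniform(0.5, 2.0, size=N)      # heterogeneous productivity
efforts = np.array([0.0, 1.0, 2.0])         # possible effort levels
tax_grid = np.linspace(0.0, 0.8, 9)         # planner's action space

Q_inner = np.zeros((N, len(efforts)))       # per-agent action values
Q_outer = np.zeros(len(tax_grid))           # planner's value per tax rate
alpha, eps = 0.1, 0.1

for episode in range(5000):
    t = rng.integers(len(tax_grid)) if rng.random() < eps else Q_outer.argmax()
    tau = tax_grid[t]
    # Inner loop: agents pick efforts epsilon-greedily; income is taxed
    acts = np.where(rng.random(N) < eps,
                    rng.integers(len(efforts), size=N),
                    Q_inner.argmax(axis=1))
    e = efforts[acts]
    income = skills * e
    rebate = tau * income.sum() / N         # revenue redistributed equally
    utility = (1 - tau) * income + rebate - 0.3 * e**2  # effort is costly
    Q_inner[np.arange(N), acts] += alpha * (utility - Q_inner[np.arange(N), acts])
    # Outer loop: planner values total welfare minus an inequality penalty
    welfare = utility.sum() - utility.std()
    Q_outer[t] += alpha * (welfare - Q_outer[t])

print(f"learned tax rate: {tax_grid[Q_outer.argmax()]:.2f}")
```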
Bridging complex systems science and MARL also offers new possibilities. MARL operationalizes complex cognition processes, while complex systems approaches provide a structured understanding of how cooperation emerges. For example, Barfuss et al. (2020)11 combine dynamical systems theory with reinforcement learning agents to analyze how agents learn sustainable collective action under a variety of conditions. Such integration can enhance our understanding of collective decision-making and cooperation, and inform the design of cooperative algorithms and strategies for achieving sustainability. For example, it can help identify which features of the agents' perception, internal representation, and decision-making systems are particularly apt to promote cooperative behavior in the face of environmental disaster.
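As a flavor of such dynamical-systems analyses, the sketch below integrates replicator dynamics for a toy collective-risk dilemma, in which groups must reach a threshold of cooperators to avert probabilistic collapse. The payoff structure is illustrative and is not the exact model of Barfuss et al.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 6, 3        # group size, cooperator threshold to avert collapse
b, c = 1.0, 0.1    # endowment, cost of cooperating
risk = 0.9         # probability of collapse if the threshold is missed

def expected_payoffs(x, samples=20000):
    """Monte Carlo estimate of a cooperator's and a defector's expected
    payoff when the other N-1 group members are drawn from a population
    with cooperator fraction x."""
    others = rng.binomial(N - 1, x, size=samples)  # cooperating co-players
    safe_if_C = (others + 1 >= M)                  # focal player cooperates
    safe_if_D = (others >= M)                      # focal player defects
    collapse = rng.random(samples) < risk
    pay_C = np.where(safe_if_C | ~collapse, b, 0.0) - c
    pay_D = np.where(safe_if_D | ~collapse, b, 0.0)
    return pay_C.mean(), pay_D.mean()

# Replicator dynamics: dx/dt = x (1 - x) (f_C - f_D)
x, dt = 0.55, 0.5
for step in range(400):
    fC, fD = expected_payoffs(x)
    x = np.clip(x + dt * x * (1 - x) * (fC - fD), 0.0, 1.0)
print(f"long-run cooperator fraction: {x:.2f}")
```

With a high collapse risk, paying the cooperation cost is worthwhile and the population settles at a substantial fraction of cooperators; lowering the risk parameter tips the system back toward the familiar tragedy of the commons.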
Another way forward is to combine the complex systems frameworks of agent-based models (ABMs) with LLMs. Recent advances in LLMs have enabled a new class of ABMs that combine complex, adaptive decision-making with large-scale simulation capacity. These new, generative agent-based models (GABMs) allow agents to reason, communicate, and act in contextually appropriate ways using natural language.12 By capturing both rich individual behavior and social interaction, this approach offers a framework for linking local actions to emergent collective phenomena.
Tools like Concordia, a generative agent-based modeling library, provide a modular framework in which agents interact in physical, social, or digital spaces. The agents' actions are mediated by a Game Master that interprets their language-based intentions and enforces environmental rules.13 Foundation models have also been fine-tuned to capture and predict human cognition across a variety of behavioral experiments.14 Scalable approaches such as LLM archetypes allow millions of agents with distinct behavioral patterns to be simulated by grouping similar individuals and sampling representative decisions, as demonstrated in simulations of pandemic response in New York City.15
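The archetype idea can be sketched as follows. The query_llm stub is a hypothetical placeholder for any chat-completion client, and the agent attributes and prompt are invented for illustration; the point is that the number of LLM calls scales with the number of archetypes, not the number of agents.

```python
import random

# Hypothetical LLM client; replace this stub with a real chat-completion
# call. The attributes and the prompt below are purely illustrative.
def query_llm(prompt: str) -> str:
    return "yes" if "risk-averse" in prompt else "no"  # placeholder stub

population = [
    {"age": random.choice(["in their 20s", "in their 40s", "in their 70s"]),
     "risk": random.choice(["risk-averse", "risk-tolerant"])}
    for _ in range(100_000)
]

def archetype(person):
    # Coarse behavioral key: all agents sharing it reuse one LLM decision.
    return (person["age"], person["risk"])

# One LLM call per archetype rather than one per agent: a handful of
# calls instead of 100,000.
decisions = {}
for age, risk in {archetype(p) for p in population}:
    prompt = (f"You are a {risk} city resident {age} during a pandemic "
              f"wave. Do you comply with a new mask mandate? "
              f"Answer yes or no.")
    decisions[(age, risk)] = query_llm(prompt)

# Every simulated agent acts out its archetype's representative decision.
actions = [decisions[archetype(p)] for p in population]
print(sum(a == "yes" for a in actions), "of", len(actions), "agents comply")
```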
AI-Assisted Decision-Making
There is a growing discussion about the need to increase the availability of AI tools to non-experts.16 A number of initiatives and projects have been developed in recent years to explore how human-centric, explainable AI approaches can support collective decisions in various ways. LLMs, for example, have been studied successfully as mediators in deliberative dialogues.17 In urban planning, a framework for the potential allocation of urban development areas integrates geographic information systems (GIS) with participatory scenario planning, enabling non-experts to visualize trade-offs between urban development and ecological preservation across case studies.18 Participants reported greater confidence in AI-assisted decisions due to clear scenario visualization and collaborative refinement of adaptation pathways, highlighting how these tools helped reconcile competing priorities through accessible interfaces.
Similarly, Utrecht University’s project and framework Human-Centered AI for Climate Adaptation combines interactive visualization and explainable AI to co-design Dutch flood resilience strategies, in which stakeholders dynamically evaluate AI-generated approaches balancing agricultural, urban, and shipping needs.19 User-friendly AI applications that use Bayesian networks,20 as another example, can empower non-experts and multidisciplinary teams to analyze complex sustainability datasets and explore in-silico policy scenarios.
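As an illustration of this kind of tool, the sketch below builds a toy policy-scenario Bayesian network with the open-source pgmpy library. The variables and probabilities are invented for illustration, and recent pgmpy releases rename BayesianNetwork to DiscreteBayesianNetwork.

```python
# Requires: pip install pgmpy (recent releases rename BayesianNetwork to
# DiscreteBayesianNetwork; adjust the import accordingly).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy policy-scenario network: subsidies affect adoption of green
# infrastructure, which (together with rainfall) affects flood damage.
model = BayesianNetwork([("Subsidy", "Adoption"), ("Adoption", "Damage"),
                         ("Rainfall", "Damage")])

cpd_subsidy = TabularCPD("Subsidy", 2, [[0.5], [0.5]])    # no / yes
cpd_rain = TabularCPD("Rainfall", 2, [[0.7], [0.3]])      # normal / extreme
cpd_adopt = TabularCPD("Adoption", 2,
                       [[0.8, 0.3],    # P(low adoption | subsidy = no/yes)
                        [0.2, 0.7]],   # P(high adoption | subsidy = no/yes)
                       evidence=["Subsidy"], evidence_card=[2])
cpd_damage = TabularCPD("Damage", 2,
                        # P(low damage), P(high damage) for each
                        # (Adoption, Rainfall) combination
                        [[0.9, 0.5, 0.95, 0.7],
                         [0.1, 0.5, 0.05, 0.3]],
                        evidence=["Adoption", "Rainfall"],
                        evidence_card=[2, 2])

model.add_cpds(cpd_subsidy, cpd_rain, cpd_adopt, cpd_damage)
assert model.check_model()

# In-silico scenario: how does flood damage respond to a subsidy policy?
infer = VariableElimination(model)
print(infer.query(["Damage"], evidence={"Subsidy": 1}))
```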
Some of these uses can be informed by predictive analysis that combines AI-augmented modeling with scenario processes. For example, the digital platform ClimateIQ combines physical models with convolutional neural networks (CNNs) to project urban climate risks in ways that allow for local engagement and collaborative climate adaptation planning (see Theme box 3, ClimateIQ: Empowering Communities to Prepare for Climate Threats with Hyperlocal Data). The ARtificial Intelligence for Environment & Sustainability (ARIES) project uses a semantically connected network of AI models that lets users ask ecosystem service-related questions or build scenarios to support engagement and planning.21,22
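The general pattern of pairing physical-model output with a convolutional network can be sketched as follows. This is a generic illustration of CNN-based downscaling of gridded climate fields to a finer local risk map, not ClimateIQ's actual architecture.

```python
import torch
import torch.nn as nn

# Generic pattern: a small CNN maps coarse gridded climate-model fields to
# a finer-resolution local risk map. Layer sizes and inputs are invented
# for illustration only.
class RiskDownscaler(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one output channel: risk
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A 32x32 tile with three channels, e.g. rainfall, temperature, elevation.
coarse_fields = torch.randn(1, 3, 32, 32)
risk_map = RiskDownscaler()(coarse_fields)  # shape (1, 1, 128, 128)
print(risk_map.shape)
```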
Limitations and Key Challenges
While their potential is intriguing, each use of AI listed earlier comes with known limitations. Deep MARL involves high computational costs, and the numerous parameters of these simulations make learned behaviors difficult to interpret and analyze; such models may also suffer from issues like overparameterization, leading to unreliable outputs. As we discussed in the previous chapter, Improving Sustainability Science Communication, hallucinations from LLMs also remain a concern.
Both LLM archetypes and GABMs offer new ways to simulate collective decision-making, but each has important limitations. Archetypes simplify individuals into a few representative types, which can obscure meaningful variation within groups. Generative models like Concordia13 produce more detailed behavior by prompting LLMs directly, but these behaviors reflect biases and patterns in the training data. This can lead to biased or flattened portrayals, especially of marginalized groups.23 Problems arise when a model is applied to represent a population whose behavioral patterns differ from those in the training data, which is most likely for marginalized groups. The deeper problem is that we often cannot tell when the patterns differ enough for this to become significant.
These risks are serious if the models are used to inform real-world decisions, and they illustrate the need for AI model transparency and interpretability. However, even though increasing transparency and interpretability in AI model design is important, it does not automatically lead to greater user trust or better decision-making.24,25
It is worth noting, however, that traditional agent-based models also simplify human behavior. Novel AI methods may introduce different risks of abstraction, but, if applied with nuance and skepticism, they can help simulate collective behavior at unprecedented complexity and scale.
Lastly, while uses of AI can help model and inform collective decisions, their proper use does not guarantee a successful outcome. Climate and sustainability questions entail a suite of challenging value judgements, distributional issues, and inequalities, which can lead to gridlock in decision-making and politics.
Bibliography
- Barfuss, W. et al. Collective cooperative intelligence. Proc. Natl. Acad. Sci. 122, e2319948121 (2025).
- Galaz, V., Olsson, P., Hahn, T., Folke, C. & Svedin, U. The Problem of Fit among Biophysical Systems, Environmental and Resource Regimes, and Broader Governance Systems: Insights and Emerging Challenges. in Institutions and Environmental Change (eds Young, O. R., King, L. A. & Schroeder, H.) 147–186 (The MIT Press, 2008). doi:10.7551/mitpress/7920.003.0011.
- Young, O. R. The Institutional Dimensions of Environmental Change: Fit, Interplay, and Scale. (The MIT Press, 2002). doi:10.7551/mitpress/3807.001.0001.
- Chang, C. H. et al. New opportunities and challenges for conservation evidence synthesis from advances in natural language processing. Conserv. Biol. 39, e14464 (2025).
- Larosa, F. et al. Large language models in climate and sustainability policy: limits and opportunities. Environ. Res. Lett. 20, 074032 (2025).
- Muccione, V. et al. Integrating artificial intelligence with expert knowledge in global environmental assessments: opportunities, challenges and the way ahead. Reg. Environ. Change 24, 121 (2024).
- Hofbauer, J. & Sigmund, K. Evolutionary Games and Population Dynamics. (Cambridge University Press, 1998).
- Novak, B. J., Fraser, D. & Maloney, T. H. Transforming Ocean Conservation: Applying the Genetic Rescue Toolkit. Genes 11, 209 (2020).
- Zheng, S., Trott, A., Srinivasa, S., Parkes, D. C. & Socher, R. The AI Economist: Taxation policy design via two-level deep multiagent reinforcement learning. Sci. Adv. 8, eabk2607 (2022).
- Zhang, T. et al. AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N. Preprint at https://doi.org/10.48550/arXiv.2208.07004 (2022).
- Barfuss, W., Donges, J. F., Vasconcelos, V. V., Kurths, J. & Levin, S. A. Caring for the future can turn tragedy into comedy for long-term collective action under risk of collapse. Proc. Natl. Acad. Sci. 117, 12915–12922 (2020).
- Ghaffarzadegan, N., Majumdar, A., Williams, R. & Hosseinichimeh, N. Generative Agent-Based Modeling: Unveiling Social System Dynamics through Coupling Mechanistic Models with Generative Artificial Intelligence. Syst. Dyn. Rev. 40, e1761 (2024).
- Vezhnevets, A. S. et al. Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia. Preprint at https://doi.org/10.48550/arXiv.2312.03664 (2023).
- Binz, M. et al. A foundation model to predict and capture human cognition. Nature 644, 1002–1009 (2025).
- Chopra, A., Kumar, S., Giray-Kuru, N., Raskar, R. & Quera-Bofarull, A. On the limits of agency in agent-based models. Preprint at https://doi.org/10.48550/arXiv.2409.10568 (2024).
- United Nations. Human Development Report 2025. Human Development Reports https://hdr.undp.org/content/human-development-report-2025 (2025).
- Tessler, M. H. et al. AI can help humans find common ground in democratic deliberation. Science 386, eadq2852 (2024).
- Nizamani, M. M. et al. Ethical AI: Human-centered approaches for adaptive and sustainable urban planning and policy. Land Use Policy 157, 107650 (2025).
- Human-Centered AI for Climate Adaptation - Pathways to Sustainability - Utrecht University. https://www.uu.nl/en/research/sustainability/human-centered-ai-for-climate-adaptation.
- How, M.-L., Cheah, S.-M., Chan, Y.-J., Khor, A. C. & Say, E. P. Artificial Intelligence-Enhanced Decision Support for Informing Global Sustainable Development: A Human-Centric AI-Thinking Approach. Information 11, 39 (2020).
- ARIES - ARtificial Intelligence for Environment & Sustainability. ARIES https://aries.integratedmodelling.org/.
- DAKIS. https://adz-dakis.com/en/.
- Wang, A., Morgenstern, J. & Dickerson, J. P. Large language models that replace human participants can harmfully misportray and flatten identity groups. Nat. Mach. Intell. 7, 400–411 (2025).
- Schmidt, P., Biessmann, F. & Teubner, T. Transparency and trust in artificial intelligence systems. J. Decis. Syst. 29, 260–278 (2020).
- Wang, X. & Yin, M. Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making. in Proceedings of the 26th International Conference on Intelligent User Interfaces 318–328 (Association for Computing Machinery, New York, NY, USA, 2021). doi:10.1145/3397481.3450650.
About the authors
Wolfram Barfuss is Professor of Integrated Systems Modeling for Sustainability Transitions at the Rheinische Friedrich-Wilhelms-Universität Bonn.
Victor Galaz is an associate professor in political science at Stockholm Resilience Centre at Stockholm University. He is also programme director of the Beijer Institute’s Governance, Technology and Complexity programme.
Anna B. Stephenson is a Postdoctoral Research Associate at the High Meadows Environmental Institute at Princeton.
Erik Zhivkoplias is a PhD candidate at Stockholm Resilience Centre.
Jobst Heitzig is a Senior Researcher at the Potsdam Institute for Climate Impact Research, where he leads the working group on Behavioural Game Theory and Interacting Agents.
