Conclusions and recommendations

"AI silicon clouds collage" by Catherine Breslin & Tania Duarte via betterimagesofai.org. (CC-BY 4.0).

The planet on which we live is undergoing rapid changes. The conversion of land to agricultural and urban areas, the extraction of natural resources, the loss of biodiversity, and rising carbon emissions are altering planetary processes on land and in the oceans in ways that increase risks to people all over the world. At the same time, novel digital technologies and advancements in artificial intelligence (AI) have profoundly reshaped social interactions, creating a more connected global population while also giving rise to a range of new risks.

Science plays a key role in this rapidly changing context. It provides evidence of how planetary changes could affect people’s lives, offers analyses for making informed decisions to mitigate risks and reduce social vulnerabilities, and spurs the innovation needed to drive social transformations. At its best, science can guide decisions that help societies move toward a safe future for all. Exploring such pathways is far from straightforward, however. Analyzing the multiple and interacting complexities of human behavior, ecosystems, a changing climate, and technological change remains a considerable research challenge.

While AI is already driving breakthroughs in fields from the life sciences to mathematics, its role in sustainability research is currently less established. This raises a critical question: could AI be effectively applied to the complex, interconnected challenges of sustainability?

This concluding chapter is structured in three parts. The first part summarizes an internal survey in which we assess and rank the different issue areas according to their present level of AI maturity (i.e., access to data and application of AI methods), as well as their wider accessibility. The second part summarizes high-level insights from our analysis. The third and final part offers a list of recommendations to colleagues in academia, governments and the public sector, entrepreneurs, and philanthropic funders.

Comparing the Issue Areas

The use of AI methods differs considerably between the eight chosen issue areas, as shown by our literature overview and deeper analyses of each area (fig. 4a-h). In general, classical machine learning methods are more commonly used in areas exploring complex interactions in climate and nature (e.g., Understanding a Complex Earth System, Securing Freshwater for All, and Stewarding Our Blue Planet), while newer AI methods are more frequent in areas where stakeholder participation and interactions are more prevalent (e.g., Prospering on an Urban Planet, Improving Sustainability Science Communication, and Collective Decisions for a Planet Under Pressure).

How do these issue areas compare in terms of their AI capabilities, and how accessible are insights derived from such analyses to actors outside of academia? Which areas seem to have made the fastest progress? And which ones need a stronger push to fulfill their potential? These questions all involve comparisons between the issue areas, and the answers offer insights into differing levels of maturity and accessibility that are of interest to both researchers and funders.


Fig. 8. Overall evaluation across issue areas. Each participating author evaluated the areas along three key dimensions: (1) data access and quality, (2) model performance, and (3) accessibility for non-experts, i.e., the extent to which different stakeholders (such as NGOs, civil society groups, and local communities) can use or develop AI tools to address the issue. Authors rated each dimension on a scale from 1 to 5, or opted out of rating when appropriate. The complete distribution of scores is available in Appendix 6. Produced by Erik Zhivkoplias.

We bring together data from internal expert assessments along three dimensions: data availability and quality; AI model performance; and the accessibility of AI models and insights to non-expert communities (e.g., NGOs, civil society, local communities). The assessment builds on an online survey sent to each contributing author of this report during the final stage of writing, so the literature overview and review of each issue area informed the individual assessments. Between 15 and 23 experts contributed depending on the issue area, forming the joint assessment (from here on referred to as “the assessment”) shown in Figure 8.*
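To make the aggregation concrete, the sketch below shows one way ratings of this kind could be combined into per-area averages of the sort plotted in Figure 8. It is a minimal illustration only: the column names, issue-area labels, and values are hypothetical placeholders, and the actual survey data and methodology are described in Appendix 6.

```python
# Minimal, illustrative sketch of aggregating 1-5 survey ratings per issue area
# and dimension. Names and values are hypothetical, not the report's data.
import numpy as np
import pandas as pd

# One row per (author, issue area, dimension); NaN marks an opt-out.
responses = pd.DataFrame({
    "issue_area": ["Securing Freshwater", "Securing Freshwater",
                   "Collective Decisions", "Collective Decisions"],
    "dimension":  ["data_quality", "model_performance",
                   "data_quality", "model_performance"],
    "rating":     [4, 5, 2, np.nan],   # 1-5 scale; NaN = opted out
})

# Average rating per issue area and dimension, ignoring opt-outs,
# plus the number of experts who actually rated each cell.
summary = (responses
           .groupby(["issue_area", "dimension"])["rating"]
           .agg(mean_rating="mean", n_raters="count")
           .reset_index())
print(summary)
```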

The assessment shows several notable findings. Some issue areas stand out as more mature, characterized by relatively high scores on AI model performance and on data availability and quality. This grouping includes the areas Understanding a Complex Earth System, Sustainability Science Communication, Securing Freshwater, and Urban Planet. In contrast, there are issue areas where data availability is comparatively sparse and where AI methods development is viewed as less advanced. This applies in particular to the area Collective Decisions for a Planet Under Pressure, which scored lowest on both dimensions (see Figure 8). These areas are likely to require a step-change increase in investment for research and applications (including data infrastructure, compute, and expertise) to enable innovation with AI methods at scale.

Advancements in Large Language Models (LLMs), along with their increasing accessibility to the public, may, we believe, have increased the availability of AI for science communication. This trend aligns with the relatively high accessibility grade given to the Sustainability Science Communication issue area. It is noteworthy, however, that two crucial issue areas, Preparing for a Future of Interconnected Shocks and Understanding a Complex Earth System, were graded as offering low accessibility to users outside of academia, such as NGOs, civil society, and local communities. These latter areas thus offer an opportunity for funders to invest in ways that help bridge the gap between ongoing AI research for sustainability and its uses by society. Increased collaboration across domains and targeted research initiatives may drive scientific advances and innovative uses of AI across, and between, issue areas. In summary, there is a need for more collaboration within the community, including knowledge sharing, to ensure that investment flows to the areas of greatest impact, especially since critical issues often lie at the intersection of different domains.

Key insights

1. AI offers vast potential across the sustainability sciences

Recent advancements in AI, combined with increasingly powerful computing capabilities and methodologies, are enabling the discovery of patterns and relationships within vast datasets from various fields of science, leading to intriguing scientific discoveries. This capacity for data integration, generation, and analysis across global and local sustainability domains enables analyses of questions that were previously too complex, or too interdisciplinary, to tackle. This capability extends to diverse data sources, enabling automated feature extraction in realms ranging from marine acoustics and species identification to water resources monitoring and urban climate risk mapping (see Issue Areas). Hybrid simulation-AI models, which combine AI algorithms with process-based models, are proving particularly powerful, with reduced computational cost compared to traditional methods (p. 16, 19, 34, Theme box 3). There are, however, large differences in capabilities between the analyzed issue areas (p. 65-67).
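As a stylized illustration of the hybrid simulation-AI idea mentioned above (not a method from any specific study cited in this report), the sketch below trains a small machine-learning emulator on outputs of a toy "process-based model", so that many new scenarios can then be screened at a fraction of the simulation cost. The toy model, parameter names, and settings are entirely hypothetical.

```python
# Stylized sketch of a hybrid simulation-AI workflow: train an ML emulator on
# outputs of a (toy) process-based model, then use the cheap emulator to screen
# scenarios. The "process model" here is a made-up placeholder, not a real
# Earth system or hydrological model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def toy_process_model(params: np.ndarray) -> float:
    """Stand-in for an expensive simulation: maps two parameters to an outcome."""
    emissions, land_use = params
    return 1.5 * emissions + 0.8 * land_use**2 + 0.1 * emissions * land_use

rng = np.random.default_rng(42)
X_train = rng.uniform(0, 1, size=(200, 2))                    # sampled scenarios
y_train = np.array([toy_process_model(p) for p in X_train])   # "expensive" runs

emulator = RandomForestRegressor(n_estimators=200, random_state=0)
emulator.fit(X_train, y_train)

# Screen many new scenarios cheaply with the emulator instead of the simulator.
X_new = rng.uniform(0, 1, size=(10_000, 2))
predicted_outcomes = emulator.predict(X_new)
print(predicted_outcomes[:5])
```

In practice the process-based component may also be embedded inside the learning loop (e.g., as a physics-informed constraint) rather than only providing training data; the surrogate approach shown here is just one common pattern.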

2. AI can inform decisions and help communicate complex sustainability challenges

AI can bolster decision-making and science communication for sustainability. Examples include AI-augmented analysis supporting marine protected area (MPA) management (p. 39), water resource planning and management (p. 41-43), and the development of early warning systems for hazards and conflicts (p. 27-29). Large language models (LLMs) and multimodal large language models (MLLMs) can help personalize, localize, and make complex scientific information more accessible, engaging, interactive, and actionable for diverse audiences (p. 57).

3. Environmental footprint, biases, hallucinations, and unequal access to AI pose sustainability risks

While AI as a research method for sustainability holds important potential, the increasing use of AI in society at large, particularly of generative AI, demands energy and freshwater across the whole supply chain, leading to increased CO₂ emissions and e-waste, and often impacting already vulnerable communities. This growing footprint, including toxic substances from e-waste and water withdrawals for energy production and/or cooling in water-scarce regions, represents a sustainability risk and climate justice issue in its own right, and thus requires serious consideration and mitigation (p. 10-11). Realizing AI’s potential for sustainability will require overcoming issues of data scarcity, quality, and geographical imbalance, which are particularly acute in low-income countries (p. 12). Underrepresentation of low- and middle-income countries in AI research, design, and use may perpetuate existing inequalities. Algorithmic biases and “hallucinations” introduce substantial ethical and accuracy challenges, and can lead to allocative harms that disproportionately affect vulnerable groups (p. 11-12). Limited interpretability and vast computational demands are major hurdles for AI adoption and trust. Complex deep learning architectures still often function as “black boxes,” making it difficult to understand the causal mechanisms behind AI predictions and acting as a barrier to trust for decision-makers and users (p. 11). Training and optimizing these models, especially foundation models, requires vast computational resources, creating access and usability challenges, but also opportunities once these models are made accessible (see Theme Box 1).

4. Responsible uses of AI for sustainability research are possible

As all individual chapters illustrate, addressing the risks associated with uses of AI for sustainability research is possible. Responsible AI development for sustainability requires fostering interdisciplinary collaboration; augmenting rather than replacing human expertise; supporting academic freedom, peer review, and open science; and establishing robust governance mechanisms that proactively address AI’s limitations. Several pioneering private, science, and civil society-led initiatives have emerged in this domain. These include, for example, tools and methods for the development of responsible foundation models; consensus checklists to help advance valid, reproducible, and generalizable AI-based science; and Indigenous Data Sovereignty principles. The co-production of knowledge and benefit-sharing with local communities, decision-makers, and others offer ways to empower and augment human expertise, but require careful selection of problems and rigorous evaluation.

5. Breakthroughs in sustainability research driven by AI are within our reach and essential for our collective future

Current applications of AI for sustainability-related research are extensive and diverse, and show that an “AI for Sustainability Science” community has begun to emerge. Its diversity in focus and expertise (see Issue Area chapters), as well as in levels of AI maturity (p. 65-66), can be viewed as a source of innovation. The potential is substantial, and could transform existing areas of sustainability science and open up new research areas and practical applications. The growth in public and private investments in AI hardware, AI research, and “AI for Science” in general will not, however, drive scientific breakthroughs of relevance for sustainability at the pace and scale needed. A lack of technical expertise, bottlenecks in access to AI compute, and limited collaboration between AI scientists, developers, and sustainability researchers create obstacles that limit the current potential of AI. Supporting cross-domain collaborations; scaling up AI funding for integrated biodiversity, climate, and social-ecological research; developing clear guidelines for responsible AI development and use for sustainability; and building ambitious international research programs across domains could lead to important advances that benefit people and the planet.

Key recommendations for different stakeholders

Researchers

  • Explore a diversity of AI methods across various sustainability issue areas. Build on existing work in related areas to lay a foundation for an agenda for AI for the Sustainability Sciences. Foster inter- and transdisciplinary collaboration among ecologists, social scientists, Earth System scientists, and computer scientists.
  • Address AI limitations and risks explicitly. Pioneer work that helps reduce the environmental footprint of AI (energy, water, e-waste) and develop methods to mitigate these impacts, including rebound effects. Investigate algorithmic biases and develop methods for mitigating “hallucinations” in LLMs, especially their impact on vulnerable groups and accuracy in sustainability contexts. Develop more interpretable and explainable AI (XAI) models to address the “black box” problem. If data labelling is outsourced, secure data transparency and ensure that the work is done by service providers offering fair and just working conditions.
  • Counter data gaps, and geographical and social imbalances. Develop methods for few-shot learning (enabling accurate model predictions with limited data), self-supervised approaches, and data integration techniques. Prioritize collaborations in low- and middle-income countries (LMICs) and under-represented regions, and with under-represented communities, to counteract imbalances. Ensure equitable representation in AI model development and use.
  • Champion responsible AI principles. Emphasize that AI should augment, not replace, domain expertise, including local knowledge. Focus research on AI tools that empower scientists, decision-makers, and stakeholders. Advocate for open access to training data and source code, and rigorous evaluation of AI models, to foster transparency, replicability, and trust-building.

Private sector

  • Invest in research and development for AI solutions that bridge science and real-world action. Expand the scope of work by private actors to include aspects of planetary change beyond energy and carbon emissions, for example, biodiversity loss, deforestation, and ocean acidification. Include integrated approaches that connect biosphere changes and human responses. Prioritize the development of interpretable AI systems, particularly for decision-support tools, to build trust among users by making AI models understandable.
  • Mitigate the environmental footprint of AI and other risks. Prioritize the development of energy-efficient AI algorithms and hardware, particularly for large models and data centers. Implement sustainable practices for managing e-waste and limiting water use.
  • Identify and mitigate algorithmic biases and "hallucinations". Implement rigorous testing and validation protocols in AI systems, particularly those used for sustainability applications. Invest in developing universal industry standards and best sustainability practices for AI safety and ethics.
  • Invest in AI that matters to people and the planet. Develop investment strategies that consider the material and carbon footprint of AI. Support the next generation of AI companies that explicitly focus on addressing climate and sustainability challenges.

Public agencies

  • Invest in opportunities for AI for the Sustainability Sciences. Establish interdisciplinary centers of excellence that bring together AI scientists, AI entrepreneurs, and sustainability researchers. Include programs that provide AI expertise and compute to researchers in the sustainability science domain. Support innovation initiatives that allow local communities and civil society to access independent AI expertise.
  • Secure robust AI governance for sustainability. Establish and enforce governance frameworks that include ethical guidelines for AI in the sustainability domain, grounded in the principles of fairness, accountability, transparency, ethics, and sustainability (FATES).
  • Implement policies for environmental sustainability. Governments and public regulatory agencies must ensure that the growth of AI and data centers does not compromise environmental sustainability. Mandate and enforce transparency requirements, and extend producer responsibility for hardware to manage e-waste. Set water efficiency standards for cooling systems to conserve local water resources. Require companies to source energy exclusively from renewable sources and to conduct environmental impact assessments (EIAs). Protect vulnerable communities from allocative harm.
  • Enhance data sharing, collaboration, and capacity to use AI. Engage in open collaboration with academia and public agencies, sharing data and code where appropriate, to advance responsible AI development. Utilize AI tools to enhance public engagement and communication on climate and sustainability, adapting messages to diverse audiences and local contexts. Invest in collecting and curating high-quality, representative datasets, especially from data-scarce regions. Promote open access to training data and source code to ensure replicability and reduce bias. Invest in shared infrastructure and promote access to high-performance computing for sustainability applications.

Philanthropies

  • Support inter- and transdisciplinary collaborations for AI for sustainability. Invest in research and initiatives that explore the potential of AI across diverse sustainability issue domains, particularly those that bridge scientific knowledge with real-world action. Direct investments toward underrepresented domains, for example, AI at the intersection of climate, health, and biodiversity, and AI that advances areas where the immediate return on investment is not obvious but that addresses fundamental planetary challenges.
  • Support data infrastructure and insights for action. Support projects focused on the development and implementation of AI for enhanced monitoring, enabling all sectors of society to tackle the repercussions of rapid climate and environmental change.
  • Invest in research into the sustainability implications of AI. Focus on algorithmic bias, hallucinations, fairness, equitability, and the prevention of allocative harms. Support independent auditing and oversight mechanisms for AI systems in the sustainability domain.
  • Mitigate social and geographical bias. Support efforts to make computational resources and robust AI tools broadly accessible to researchers, organizations, and agencies in resource-limited settings. Ensure that underserved communities benefit from advancements in AI for sustainability.
About the authors

Victor Galaz is an associate professor in political science at Stockholm Resilience Centre at Stockholm University. He is also programme director of the Beijer Institute’s Governance, Technology and Complexity programme.

Timon McPhearson is director of the Urban Systems Lab in New York, professor of urban ecology at The New School's Environmental Studies Program, and research faculty at the Tishman Environment and Design Center.

Samuel T. Segun is Senior Researcher at the Global Center on AI Governance.

Jonathan Donges is a researcher at Stockholm Resilience Centre and deputy lead of the Earth Resilience Science Unit at the Potsdam Institute for Climate Impact Research.

Ingo Fetzer is a researcher at Stockholm Resilience Centre.

Elizabeth Tellman is assistant professor at the Nelson Institute of Environmental Studies, University of Wisconsin-Madison.

Magnus Nyström is professor and science director at Stockholm Resilience Centre.

*The interpretation of the survey results should be approached with caution. The assessment is based solely on responses from members of the author team. Moreover, the grading reveals a tangible spread across the assessed dimensions (see Appendix 6 for details on the survey methodology and results). The spread, as seen in the distribution of values, indicates several potential systemic issues. These include: limited cross-disciplinary insight among experts; the rapid pace of technological advancements outstripping the collective ability to integrate these advances into ongoing research; and insufficient coherence and integration across research communities in the different issue areas. The diversity of responses may also imply a lack of awareness of developments in the other issue areas. To strengthen systemic analyses of this kind and complement the area deep dives, future assessments should involve a larger and more geographically diverse group of experts. Such assessments should be followed up on a regular basis to monitor the development and maturity of research areas over time, and ensure representation from researchers from low- and middle-income countries in Asia, Africa, Latin America, and Oceania.
