Sustainability risks and allocative harms

"Distorted Sand Mine" by Lone Thomasky & Bits&Bäume via betterimagesofai.org. (CC-BY 4.0).
AI offers numerous opportunities for climate and sustainability research and action, but what are the most pertinent risks from a sustainability perspective?
Artificial intelligence (AI), autonomous systems, and associated technologies are increasingly presented as disruptive technologies with the potential to fundamentally change research, society, and industry. How to develop such technologies responsibly, and how best to monitor and govern their risks, are questions that have produced a growing body of scientific literature. This includes research on AI ethics,1 law,2 sociology,3 allocative harms that affect vulnerable social groups,4–6 and the political economy of an increasingly digitized and data-driven world,7,8 to mention just a few strands.
A number of sustainability risks and ethical challenges emerge from the increased use of AI.9 We discuss some of these below, but would emphasize that the list is not exhaustive. Sustainability scientists, large research programs, and research centers planning to use AI in their work need to consider each of these risks, look out for others, and mitigate harmful effects.
Material, carbon, and water footprint
As the use of AI as a consumer product increases—for example in digital services like chatbots, and the generation of synthetic content like videos and text—so does the ecological and climate footprint of its underpinning infrastructure. This includes the amount of e-waste, such as obsolete specialized processors (graphics processing units [GPUs], tensor processing units [TPUs]), data center infrastructure (servers, hard drives, networking equipment), and end-of-life AI devices (robots, autonomous vehicles, smart sensors).
E-waste represents a growing risk as it is known to release toxic substances like lead, mercury, and cadmium into the environment. These pollutants infiltrate soil and water sources, and can lead to significant ecological degradation and health risks for communities near e-waste disposal sites.10 Communities affected by e-waste pollution often lack adequate legal protection or avenues for recourse against corporations responsible for harm to the environment and, indeed, living beings.
The training and use of AI are also associated with notable increases in energy use and resulting CO₂ emissions.11,12 Calculating these emissions has proved challenging,13 but a number of attempts have been made in recent years to assess the carbon footprint of individual AI models, emissions at the company level, and future estimates at the country level.11 Whether the longer-term climate impact of AI development and use will be net positive remains a contested issue. Some analyses point to potential net climate benefits, provided that AI development and use are matched by targeted investments and supportive climate policies.14,15 Others point instead to secondary and rebound effects, which could lead to growing climate harms despite increasing efficiency over time.16,17
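To make the measurement challenge concrete, the back-of-the-envelope logic used by many footprint studies can be sketched in a few lines: accelerator energy is scaled up by the data center's overhead (its power usage effectiveness, PUE) and multiplied by the carbon intensity of the local grid. The sketch below is a minimal first-order estimate; all names and numbers are illustrative assumptions, not measurements of any particular model.

```python
# First-order estimate of training emissions: energy drawn by the
# accelerators, scaled by data-center overhead (PUE), multiplied by
# the carbon intensity of the local grid. All values illustrative.

def training_emissions_tco2e(
    gpu_count: int,              # number of accelerators used
    avg_power_kw: float,         # average draw per accelerator, kW
    hours: float,                # wall-clock training time
    pue: float,                  # power usage effectiveness of the facility
    grid_kgco2e_per_kwh: float,  # carbon intensity of the local grid
) -> float:
    energy_kwh = gpu_count * avg_power_kw * hours * pue
    return energy_kwh * grid_kgco2e_per_kwh / 1000  # tonnes CO2e

# Hypothetical run: 1,000 GPUs at 0.4 kW for 30 days (720 h),
# PUE 1.2, on a grid emitting 0.4 kgCO2e/kWh.
print(f"{training_emissions_tco2e(1000, 0.4, 720, 1.2, 0.4):.0f} tCO2e")
```

An estimate of this kind deliberately omits the embodied emissions of manufacturing the hardware and the cumulative energy of serving the model to users after deployment, which is one reason published figures diverge so widely.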
There is also growing understanding of, and concern about, the freshwater use associated with AI development and use. Water is consumed both onsite, for server cooling (to prevent overheating), and offsite, in electricity generation (through cooling at thermal and nuclear power plants, and increased evaporation from hydropower reservoirs).18 Recent estimates show that the growing use of AI is likely to increase water use in many parts of the world, at times in areas that already face water scarcity.19
The expansion of infrastructure needed for AI compute could thus increase the risk of tensions over water rights, land use, and access to rare earth minerals.20 This is an issue that requires further attention, especially considering marginalized and underserved populations that are already disproportionately impacted by climate change-related events such as extreme weather and natural hazards, and resource scarcity resulting from increased energy and water use.
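A similar first-order logic applies to water. Following the onsite/offsite decomposition above, a facility's freshwater use can be approximated from its server energy, a water usage effectiveness (WUE) factor for onsite cooling, and an electricity water intensity factor for offsite generation. The coefficients below are illustrative assumptions; real WUE and grid water-intensity values vary widely by site, season, and energy mix.

```python
# Water footprint for a given energy use, split into onsite cooling
# (litres evaporated per kWh of server energy) and offsite water
# consumed in generating the electricity. Illustrative coefficients.

def water_use_litres(
    server_energy_kwh: float,
    pue: float = 1.2,           # facility overhead on server energy
    onsite_wue: float = 0.5,    # litres per kWh of server energy (cooling)
    offsite_ewif: float = 3.1,  # litres per kWh of grid electricity
) -> float:
    onsite = server_energy_kwh * onsite_wue
    offsite = server_energy_kwh * pue * offsite_ewif  # total grid draw
    return onsite + offsite

# Hypothetical: the 288,000 kWh training run from the previous sketch.
print(f"{water_use_litres(288_000) / 1000:.0f} m^3 of freshwater")
```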
Algorithmic bias and hallucinations
The risks and impacts of possible algorithmic biases and their allocative harm21,22—that is, when opportunities or resources are withheld from certain people or groups—have gained considerable attention in the last few years. Algorithmic biases of this sort can have a number of sources,23 and could emerge in the sustainability domain in the following ways:24
Training data bias can emerge when AI systems are built on poor, limited, or biased datasets. For example, deep learning (DL)-based AI for precision agriculture and decision support can help smallholder agrarian communities adapt to a changing climate. However, algorithmic biases created by data gaps could put those same communities at serious risk if the system's recommendations are flawed or are not properly validated against local knowledge and expert opinion.
Transfer context bias can emerge when an AI system designed and trained in one ecological, climate, or social-ecological context is applied in a different, incorrect, or unintended one. While the training data and the resulting model may be well suited to the original social-ecological setting (say, a large industrial farm in a data-rich region), using the model in a different setting (e.g., a small farm) can produce flawed, biased, and inaccurate results; a simple screening check for this kind of shift is sketched after the three bias types below. AI systems built on historical ecological conditions may also fail as ecosystems across land- and seascapes shift in unexpected and sometimes irreversible ways.
Even if both the training data and the context in which the algorithm is used are appropriate, interpretation bias can still arise. Here the AI system may be working exactly as its designers intended, but the user does not fully understand its outputs, or reads into them a meaning the system cannot support.
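Transfer context bias, in particular, lends itself to a screening step before deployment: compare the distribution of each input feature between the training context and the new context, and flag large differences. The sketch below uses a two-sample Kolmogorov–Smirnov test for this; the feature names, synthetic data, and 0.2 flagging threshold are all illustrative assumptions, not a validated protocol.

```python
# Covariate-shift check before reusing a model trained in one context
# (e.g., large industrial farms) in another (e.g., smallholder plots):
# compare each input feature's distribution between the two settings.
# A large KS statistic flags features on which the new context differs
# markedly from the training data. Synthetic data for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = {"field_size_ha": rng.lognormal(4.0, 0.5, 2000),   # large farms
         "rainfall_mm":   rng.normal(600, 80, 2000)}
deploy = {"field_size_ha": rng.lognormal(0.5, 0.6, 300),   # smallholders
          "rainfall_mm":   rng.normal(950, 120, 300)}

for feature in train:
    res = ks_2samp(train[feature], deploy[feature])
    flag = "SHIFT" if res.statistic > 0.2 else "ok"
    print(f"{feature:15s} KS={res.statistic:.2f} p={res.pvalue:.1e} -> {flag}")
```

A check like this only catches shifts in measured inputs; contextual differences that are not represented as features (land tenure, local practices) still require validation with local knowledge, as noted above.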
Bias and the risk of harm can also be found in newer forms of AI. Large language models (LLMs) have demonstrated a well-documented ability to rapidly summarize, tailor, and communicate climate and environmental information (see later chapters). However, their potential to produce hallucinations, that is, inaccurate or misleading information such as false facts or citations, remains a concern.25,26 LLMs can generate malfunctioning code; refer to or cite non-existent material such as legal briefs and articles; produce factually incorrect information; introduce subtle inaccuracies, oversimplifications, or biased responses; and fail at common-sense reasoning.
Hallucinations are caused not only by the technical features of LLMs; they also arise from how humans interact with such models through prompting and feedback.27 Several private and public–private initiatives have emerged in recent years to address these issues through technical research and development and through safety protocols (examples in Appendix 1).28–30 However, the continuing lack of universal industry standards, and of agreed-upon and regulated best practices, still poses challenges for mitigating harmful uses of LLMs.31 In some extreme cases, LLMs have been used to plagiarize or to produce low-quality scientific papers and academic books, risking an erosion of integrity and of public trust in science.32,33 Such risks should not be taken lightly as sustainability scientists experiment with AI in their research.
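Some hallucinations can be screened mechanically. Fabricated citations, for instance, often carry DOIs that do not resolve, so a lightweight guardrail is to look each cited DOI up in the public Crossref registry. The REST endpoint used below is real, but the workflow itself is our illustrative sketch, not an established standard.

```python
# Guardrail against hallucinated references: check whether each DOI an
# LLM cites actually resolves in the Crossref registry. A 404 from the
# public REST API (https://api.crossref.org) means Crossref has no
# record of the DOI. This catches fabricated DOIs only; a real DOI
# attached to the wrong claim still needs human review.
import requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}",
                        timeout=timeout)
    return resp.status_code == 200

for doi in ["10.1145/3442188.3445922",   # real (ref. 6 above)
            "10.9999/fake.2024.00001"]:  # invented for illustration
    print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND")
```

A 200 response only shows that the DOI exists; verifying that the title and authors match the claimed citation, and ultimately human review, remain necessary.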
Fig. 2. The number of occurrences of the ten most frequently mentioned countries, along with a selection of the least frequently mentioned ones (chosen to illustrate the overall geographical distribution), based on the affiliations of first and second authors across the >8,500 studies on AI and the sustainability sciences included in our literature review. Large world economies dominate, such as the USA, China, India, and countries in Western Europe. Map produced by Maria Schewenius.
Data gaps and bias
Growing volumes of environmental, social, and ecological data are a fundamental prerequisite for applying AI in, for example, forest biodiversity monitoring34 and climate modeling.35 Environmental and ecological data have well-known limitations, however, in their temporal coverage, their geographical spread, and the information they carry about biological and ecosystem diversity.36–39 The rapid growth of data from cell phones and satellites offers vast opportunities to map and respond to social vulnerabilities such as poverty and malnutrition. At the same time, it has become increasingly evident that solutions driven by so-called "big data" and AI analysis can be skewed, since the most disadvantaged populations tend to be the least represented in emerging digital data sources.40 Indeed, as indicated by the literature analysis conducted in preparation for this report (see AI for the Sustainability Sciences—A Literature Review), most research sites and researchers in the field of AI for the sustainability sciences are associated with high- to upper-middle-income countries (e.g., China, the USA, and countries in Europe) (Fig. 2).
Several of the countries least represented in research on AI for the sustainability sciences are also projected to experience some of the most profound changes related to the global issues discussed in this report. Niger, for example, which did not occur in the dataset at all, has one of the most rapidly growing populations in the world, projected to double from 26 million people in 2024 to 56 million by 2054.41 Tuvalu and the Maldives are examples of low-lying island states that are particularly vulnerable to extreme sea-level rise, but they had zero and two occurrences, respectively.42
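Representation gaps of this kind are straightforward to make explicit once bibliographic metadata are in hand. The sketch below tallies the affiliation countries of first and second authors across a corpus, in the spirit of Fig. 2; the records are invented for illustration and would in practice come from bibliographic databases.

```python
# Tally affiliation countries of first and second authors across a
# corpus of studies, making geographic representation gaps explicit.
# The records below are invented for illustration only.
from collections import Counter

records = [
    {"authors": ["US", "US"]},
    {"authors": ["CN", "US"]},
    {"authors": ["CN", "CN"]},
    {"authors": ["DE", "SE"]},
    {"authors": ["IN", "IN"]},
]

counts = Counter(country
                 for rec in records
                 for country in rec["authors"][:2])  # first two authors

for country, n in counts.most_common():
    print(f"{country}: {n}")

# Countries with zero occurrences (e.g., NE for Niger in the real
# dataset) never appear in the counter and must be checked explicitly:
print("NE" in counts)  # False -> an absolute data gap
```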
Bibliography
- Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. The ethics of algorithms: Mapping the debate. Big Data & Society 3, 2053951716679679 (2016).
- Pasquale, F. New Laws of Robotics - Defending Human Expertise in the Age of AI. (The Belknap Press of Harvard University Press, Cambridge, Massachusetts, and London, England, 2020).
- Brayne, S. Big Data Surveillance: The Case of Policing. Am Sociol Rev 82, 977–1008 (2017).
- Eubanks, V. Automating Inequality - How High-Tech Tools Profile, Police, and Punish the Poor. (MacMillan Publishers and St Martin’s Press (imprint), 2018).
- Obermeyer, Z., Powers, B., Vogeli, C. & Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019).
- Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency 610–623 (Association for Computing Machinery, New York, NY, USA, 2021). doi:10.1145/3442188.3445922.
- Pasquale, F. The Black Box Society - The Secret Algorithms That Control Money and Information. (Harvard University Press, 2016).
- Zuboff, S. The Age of Surveillance Capitalism - The Fight for a Human Future at the New Frontier of Power. (PublicAffairs, 2019).
- Gaffney, O. et al. The Earth alignment principle for artificial intelligence. Nat Sustain 8, 467–469 (2025).
- Srivastav, A. L. et al. Concepts of circular economy for sustainable management of electronic wastes: challenges and management options. Environ Sci Pollut Res 30, 48654–48675 (2023).
- Energy and AI – Analysis. IEA https://www.iea.org/reports/energy-and-ai.
- Electricity 2024 – Analysis. IEA https://www.iea.org/reports/electricity-2024 (2024).
- Luccioni, S. et al. Light bulbs have energy ratings — so why can’t AI chatbots? Nature 632, 736–738 (2024).
- Luers, A. Net zero needs AI — five actions to realize its promise. Nature 644, 871–873 (2025).
- Stern, N. et al. Green and intelligent: the role of AI in the climate transition. npj Clim. Action 4, 56 (2025).
- Luccioni, A. S., Strubell, E. & Crawford, K. From Efficiency Gains to Rebound Effects: The Problem of Jevons’ Paradox in AI’s Polarized Environmental Debate. in Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency 76–88 (2025). doi:10.1145/3715275.3732007.
- Galaz, V. Dark Machines: How Artificial Intelligence, Digitalization and Automation is Changing Our Living Planet. Taylor & Francis. doi:10.4324/9781003317814.
- Ren, S. How much water does AI consume? The public deserves to know. https://oecd.ai/en/wonk/how-much-water-does-ai-consume.
- Nicoletti, L., Ma, M. & Bass, D. AI is draining water from areas that need it most. Bloomberg Technology + Green https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/ (2025).
- Vipra, J. Computational Power and AI. AI Now Institute https://ainowinstitute.org/publications/compute-and-ai (2023).
- Barocas, S., Crawford, K., Shapiro, A. & Wallach, H. The problem with bias: From allocative to representational harms in machine learning. in (Philadelphia, 2017).
- Kheya, T. A., Bouadjenek, M. R. & Aryal, S. The Pursuit of Fairness in Artificial Intelligence Models: A Survey. Preprint at https://doi.org/10.48550/arXiv.2403.17333 (2024).
- Danks, D. & London, A. J. Algorithmic Bias in Autonomous Systems. in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 4691–4697 (International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, 2017). doi:10.24963/ijcai.2017/654.
- Galaz, V. et al. Artificial intelligence, systemic risks, and sustainability. Technology in Society 67, 101741 (2021).
- International AI Safety Report 2025. GOV.UK https://www.gov.uk/government/publications/international-ai-safety-report-2025/international-ai-safety-report-2025.
- Banerjee, S., Agarwal, A. & Singla, S. LLMs Will Always Hallucinate, and We Need to Live With This. Preprint at https://doi.org/10.48550/arXiv.2409.05746 (2024).
- Wachter, S., Mittelstadt, B. & Russell, C. Do large language models have a legal duty to tell the truth? R. Soc. Open Sci. 11, 240197 (2024).
- Department for Science, Innovation and Technology. Frontier AI Safety Commitments, AI Seoul Summit 2024. https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024 (2025).
- Frontier Model Forum. Frontier Model Forum https://www.frontiermodelforum.org/ (2025).
- FACTS team. FACTS Grounding: A new benchmark for evaluating the factuality of large language models. Google DeepMind – Responsibility and Safety https://deepmind.google/discover/blog/facts-grounding-a-new-benchmark-for-evaluating-the-factuality-of-large-language-models/ (2024).
- International AI Safety Report 2025. GOV.UK https://www.gov.uk/government/publications/international-ai-safety-report-2025/international-ai-safety-report-2025.
- What counts as plagiarism? AI-generated papers pose new risks. Nature 644, 598–600 (2025).
- Singh Chawla, D. Fake AI images will cause headaches for journals. Nature (2025) doi:10.1038/d41586-025-01488-z.
- Ouaknine, A., Kattenborn, T., Laliberté, E. & Rolnick, D. OpenForest: a data catalog for machine learning in forest monitoring. Environmental Data Science 4, e15 (2025).
- Montero, D. et al. Earth System Data Cubes: Avenues for advancing Earth system research. Environmental Data Science 3, e27 (2024).
- Siddig, A. A. H. Why is biodiversity data-deficiency an ongoing conservation dilemma in Africa? Journal for Nature Conservation 50, 125719 (2019).
- Poisot, T. et al. Environmental biases in the study of ecological networks at the planetary scale. Preprint at https://doi.org/10.1101/2020.01.27.921429 (2020).
- Chapman, M. et al. Biodiversity monitoring for a just planetary future. Science 383, 34–36 (2024).
- Rega-Brodsky, C. C. et al. Urban biodiversity: State of the science and future directions. Urban Ecosyst 25, 1083–1096 (2022).
- Blumenstock, J. Don’t forget people in the use of big data for development. Nature 561, 170–172 (2018).
- UN DESA. World Population Prospects 2024: Summary of Results. (United Nations Department for Economic and Social Affairs, S.l., 2025).
- Oppenheimer, M. et al. Chapter 4: Sea Level Rise and Implications for Low-Lying Islands, Coasts and Communities. in IPCC Special Report on the Ocean and Cryosphere in a Changing Climate (Cambridge University Press, Cambridge, UK and New York, NY, USA, 2019).
- Science in the age of AI. Royal Society https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/.
- Lawrence, N. D. & Montgomery, J. Accelerating AI for science: open data science for science. R. Soc. Open Sci. 11, 231130 (2024).
- Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023).
About the authors
Samuel T. Segun is Senior Researcher at the Global Center on AI Governance.
Victor Galaz is an associate professor in political science at the Stockholm Resilience Centre, Stockholm University. He is also programme director of the Beijer Institute's Governance, Technology and Complexity programme.
Maria Schewenius is a PhD candidate at the Stockholm Resilience Centre and the Beijer Institute of Ecological Economics.

