Conservation At the Axis of Artificial Intelligence and Acceleration: Killer Apps for Some, Questions for All
At the beginning of 2025, EPIC - with support from the Walton Family Foundation - embarked on an interview and research process to answer three key questions at the heart of the debate around Artificial Intelligence and environmental conservation:
What are the highest leverage use cases for AI technologies to improve environmental and conservation outcomes?
What are the key barriers preventing governments and public interest organizations from developing or deploying those high impact technologies?
What are the negative impacts on people and the environment that stem from the data centers that power these technologies?
We’ve begun weaving together findings from conversations with conservation organizations, environmental government agencies, data scientists, and analysts that work at the intersection of tech and society. We expanded on and contextualized these interview learnings with a deep desk research process. That process included a broad variety of reports from government, civil society, and corporate stakeholders alongside investigative journalism, strategy documents, open letters, books, podcasts, and a hailstorm of judiciously filtered Hot Takes by LinkedInfluencers.
We are sharing the initial findings and clearest recommendations today as we transition out of our discovery phase. Further analysis and sharing will take place in the coming months, with the full report to be completed in December of this year.
Questions? Comments? Resources? Lord have mercy, but: LinkedIn Hot Takes?
Send them my way! - cole@policyinnovation.org.
Five Foundational Findings
“Artificial Intelligence” encompasses too broad a set of technologies to be a useful term. Artificial intelligence is a marketing term, not a technology. Understanding the differences in applicability, performance, and efficiency among the many technologies under the umbrella is the first key to better decisions about what to use, when, and how. According to one of the environmentalist software developers we spoke with, stripping away the marketing hype is critical. The lack of agreed-upon definitions makes “AI” much harder to discuss and work with because clients can’t clearly articulate what they want. The practical capabilities of these technologies also lag far behind the hype. This developer has had clients push them to build “100% AI-focused products,” but success cannot be measured by tool selection. Instead, they recommend that clients and funders focus on the outcomes they want to see and ask developers whether certain technologies might help achieve those outcomes.
The best tools are tightly scoped to their use cases. You can’t build a house with a Swiss Army knife. One-size-fits-all approaches become one-size-fits-none traps. Avoid them by identifying and addressing individual needs with specialized tools fit to the purpose. Prioritize excellence for specific outcomes and fill niches where other solutions fail. This is especially true in scientific and environmental contexts, where general-purpose models fail to capture the granularity and accuracy needed for the best results.
ACCORDION - “High Impact Applications”
Visual Assessments - The computer vision school of machine learning encodes images into numerical representations, allowing the computer to classify and compare visual data mathematically. Think of it like color-coding a calendar. Once someone learns what each color represents, they can compare the purpose of two meetings at a glance or quickly grasp the balance of their schedule for the week. We spoke with conservationists using computer vision technology to improve sustainable fisheries management, help farmers measure the impact of their conservation practices through biodiversity tracking, and measure streamflow to improve our understanding of drought, habitat, and climate resilience.
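For the technically curious, here is a minimal sketch of the “encode images as numbers, then compare the numbers” idea. It assumes Python with PyTorch and torchvision, and the camera-trap file paths are hypothetical; this illustrates the general technique, not any specific tool our interviewees use.

```python
# Minimal computer-vision sketch: turn images into numerical "embeddings"
# and compare them mathematically. File paths below are hypothetical.
import torch
import torchvision.models as models
from PIL import Image

# Pretrained backbone used as a generic image encoder; replacing the final
# classification layer with Identity makes the model output a feature vector.
weights = models.ResNet18_Weights.DEFAULT
encoder = models.resnet18(weights=weights)
encoder.fc = torch.nn.Identity()
encoder.eval()

preprocess = weights.transforms()  # resize/normalize the way the model expects

def embed(path: str) -> torch.Tensor:
    """Encode an image file as a numerical representation."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return encoder(img).squeeze(0)

# Like comparing two color-coded calendar entries at a glance:
# similar scenes produce similar vectors, so their cosine score is high.
a = embed("camera_trap/site_a_0421.jpg")
b = embed("camera_trap/site_b_0421.jpg")
similarity = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
print(f"visual similarity: {similarity:.3f}")
```

In practice, conservation teams layer a classifier or detector on top of embeddings like these to count species, flag bycatch, or estimate streamflow from imagery.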
Decision Support - Conservation and environmental decision-support systems are increasingly driven by a suite of technologies under the AI umbrella, ranging from traditional random forest methods to generative time-series forecasts. Regardless of the model architecture, these methods analyze historical data to model current and future conditions. They support a variety of functions, each illustrated by a single example from the U.S. Forest Service’s R&D division: Potential Operational Delineations. Fire services use this model to make critical decisions during active fires, predict future vulnerabilities so they can take mitigating actions, capture and institutionalize the expertise of fire managers, and serve as a system of record archiving the actions taken and their resulting impacts. Continued development of these tools points toward environmental digital twins: high-fidelity models for tracking ecosystem health and safely experimenting with interventions.
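To make that pattern concrete, here is a toy sketch of “learn from historical conditions, then score current risk.” It uses scikit-learn’s random forest on entirely synthetic data with invented feature names; it is not the Forest Service’s Potential Operational Delineations model, just the shape of the approach.

```python
# Toy decision-support sketch: fit a model to (synthetic) historical
# conditions, then surface a probability to inform today's decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical historical observations: weather and fuel features.
X = np.column_stack([
    rng.uniform(5, 40, n),    # temperature (C)
    rng.uniform(5, 100, n),   # relative humidity (%)
    rng.uniform(0, 60, n),    # wind speed (km/h)
    rng.uniform(0, 1, n),     # fuel dryness index
])
# Synthetic label: past fires were likelier when hot, dry, and windy.
signal = 0.03 * X[:, 0] - 0.02 * X[:, 1] + 0.02 * X[:, 2] + 1.5 * X[:, 3]
y = (signal + rng.normal(0, 0.3, n) > np.median(signal)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
# A decision-support system surfaces probabilities, not just labels:
print("estimated fire probability today:",
      model.predict_proba([[35, 12, 45, 0.9]])[0, 1])
```

The institutional value described above - archiving decisions and their outcomes - comes from wrapping models like this in a system of record, not from the model alone.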
Optimization - Sometimes called goal-driven systems, these models take a set of inputs, a set of constraints, and an objective, then search for the most effective way to reach that objective given the inputs and constraints. Frequently concerned with efficient resource management, optimization is relevant across a wide range of conservation and environmental domains. One example, from researchers at the Department of Energy, is eGridGPT, which explores optimization models for energy grid management. The project leverages a variety of data-science methods to more efficiently allocate power and manage load balancing, enabling broader use of intermittent green energy like wind and solar. Another exciting - and slightly meta - example comes from computer scientists at the University of Chicago. Concerned about the sustainability impacts of their field, they developed CarbonMin, an optimization model for minimizing the carbon intensity of cloud computing. Specifically tailored for inference requests made to consumer-facing language model products like ChatGPT, it directs those requests to data centers on grids with larger proportions of green energy generation.
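Here is a deliberately simplified illustration of the carbon-aware routing idea (not the CarbonMin authors’ actual implementation): given invented carbon intensities and capacities for a few data centers, a small linear program chooses where to send an hour’s worth of inference requests.

```python
# Simplified carbon-aware routing sketch. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

carbon_per_request = np.array([0.8, 0.3, 1.2, 0.5])    # gCO2 per request, per site
capacity = np.array([40_000, 25_000, 60_000, 30_000])  # requests/hour, per site
total_requests = 90_000                                 # demand this hour

# Minimize total carbon: carbon @ x, subject to sum(x) == demand, 0 <= x <= capacity.
result = linprog(
    c=carbon_per_request,
    A_eq=np.ones((1, len(capacity))),
    b_eq=[total_requests],
    bounds=[(0, cap) for cap in capacity],
    method="highs",
)

for site, load in enumerate(result.x):
    print(f"data center {site}: route {load:,.0f} requests/hour")
print(f"total emissions this hour: {result.fun / 1000:.1f} kgCO2")
```

Real systems add latency constraints and grid forecasts, but the core trade-off - cleaner grids get more of the load - looks a lot like this.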
Access to data, compute, and tech talent is a recurring challenge across conservation organizations. These three inputs are critical for developing and using most technologies under the AI umbrella. Across the sector, demand for sufficiently large, specific, and vetted datasets far outstrips supply. Similarly, corporate capture of compute resources and data science talent makes it difficult for public interest institutions to compete.
ACCORDION - “Key Inputs, Costly Competition”
Data: If you’ve been following the AI news cycle, you probably already know how important quality data is and how precarious it can be to use specific data for general functions, and vice versa. In scientific settings this rule becomes law. We heard numerous tales of data generalization woes: whether it was forestry data from Detroit that didn’t work next door in Ann Arbor, or the unfulfilled promise of pre-training an invasive bullfrog croak detector on bird call data. Several scientists brought up the murky waters of intellectual property as it applies to using other researchers’ work to train your own model. In each of these cases, a large, clean, freely accessible data commons for particular scientific use cases would advance training and tuning of AI models by leaps and bounds.
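One way practitioners catch the Detroit-to-Ann-Arbor problem before it bites is to evaluate models by holding out entire sites rather than random rows. A brief sketch, assuming scikit-learn and a hypothetical forestry table with a “site” column:

```python
# Evaluate generalization across sites, not just across random rows.
# The file name and column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

df = pd.read_csv("forest_plots.csv")
features = ["canopy_cover", "soil_moisture", "elevation"]
X, y, sites = df[features], df["biomass"], df["site"]

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Random-row cross-validation mixes every site into every fold and tends to
# look optimistic; grouping by site asks the harder question - how does the
# model do somewhere it has never seen?
scores = cross_val_score(model, X, y, groups=sites,
                         cv=GroupKFold(n_splits=5), scoring="r2")
print("held-out-site R^2 per fold:", scores.round(2))
```

A shared data commons would make exactly this kind of cross-site testing routine instead of a luxury.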
Compute: One researcher we spoke to described the cloud computing cost challenge as “$300,000 per mistake.” Put in those terms, it’s hard to overstate how razor thin the margins are for non-profits and government agencies. Cloud computing is largely controlled by three vendors (Amazon AWS, Microsoft Azure, and Google Cloud), and their share of the market is increasing as they each invest tens of billions of dollars more in data center infrastructure. Agencies with their own large-scale computing hardware are hitting their demand limits - like too many cars on the road, there are more tasks sent to the servers than the servers can handle - as more and more fuel is added to the AI fire. Some can’t effectively furnish computing resources due to asinine internal policies complicating the procedure for staff, or - in one case - entirely restricting the use of enormously powerful computing hardware that sits in the researcher’s personal office five feet away from his laptop. Expanding the computing power available to researchers isn’t cheap, but it is necessary. By investing in that infrastructure and simplifying the processes for using it, governments, nonprofits, and research institutions can enable robust experimentation and rapid development.
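To give a feel for that “$300,000 per mistake” scale, here is some purely illustrative arithmetic; every rate and cluster size below is hypothetical, not a quote from any vendor or interviewee.

```python
# Purely illustrative: how a single misconfigured training run can quietly
# burn a six-figure cloud bill. All numbers are hypothetical.
gpu_node_hourly_rate = 32.77   # hypothetical on-demand price for one 8-GPU node (USD/hour)
nodes = 16                     # hypothetical cluster size
hours_unnoticed = 24 * 24      # a bad 24-day run, caught too late

wasted_spend = gpu_node_hourly_rate * nodes * hours_unnoticed
print(f"one bad run: ${wasted_spend:,.0f}")   # roughly $302,000
```

For a non-profit or agency running on thin margins, one mistake at that scale can end the experiment.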
Tech Talent: EPIC has done copious work on the tech talent challenges within the federal government, among other constraints on innovation. This issue extends to every level of government and across the civil society sector. It doesn’t help that internecine conflicts between Big Tech CEOs are raising the temperature - and the price - for this talent across the market. Ironically, this comes even as the tech layoff wave that started in the early 2020s continues, with Microsoft dropping over 15,000 employees in the last two months. It’s devastating that, even with all this undoubtedly excellent talent available, non-profits and government agencies simply don’t have the money to hire them or the resources to support them. This challenge is compounded within government by recent federal executive actions, which have resulted in such severe losses of AI-enabling talent that the actors responsible for those cuts are now scrambling to refill their ranks, including attempts to rehire the very people they fired. Amidst that scramble, our own tracking of federal hiring data shows that the vast majority of agencies are effectively operating under a hiring freeze, unable to bring in key talent at a moment of rapid technological evolution. Opening doors for mid- and late-career technical talent to apply their skills to environmental challenges is a key opportunity, but it needs to be balanced with the ongoing development of young technical talent pipelines to ensure that the next generation of conservation technologists can develop and grow their careers.
Open Research and Development processes are critical, but largely absent. They promote sector-wide learning and discovery, deduplication of efforts, and scaling of successful solutions. Interviewees broadly called for support to experiment in controlled environments and opportunities to learn from the experimentation of others.
ACCORDION - “Yearning for Learning”
Open Innovation: Knowledge and resource sharing are critical for accelerating conservation organizations’ understanding and development of these technologies. That sharing could take the shape of the data commons mentioned above, collaborative sandboxes or open test beds for experimentation, or purposeful, public knowledge sharing from internal development efforts. These elements help to kickstart good ideas, filter out bad ones, and imagine creative alternatives. Seeding and cultivating a shared corpus of knowledge, tools, methods, and partnership opportunities will empower conservation organizations to grow alongside these technologies.
Technical Trainings and Workshops: Regardless of sector, our interviews, survey, and desk research all showed that environmental and conservation organizations want to learn more about the technologies themselves. Many of our conversations were with institutional experts who readily described the shortcomings of their knowledge and how deeply they wanted to improve. This is an encouraging sign for a field that is fundamentally scientific: while these individuals recognize the tools’ potential, they also recognize that using them correctly requires learning more first.
Guidance on Ethics, Privacy, and Security: Alongside technical knowledge, respondents highlighted the need for greater understanding of the human and societal implications of these technologies. Some interviewees framed this in terms of their relationships with key partners. For example, many of the agricultural conservation organizations we spoke with noted longstanding trust concerns that farmers hold towards businesses and governments that take and use farmers’ data without compensation or reinvestment. Others rooted this need in broader societal concerns about algorithmic bias and privacy harms. For example, a group of marine biologists working on illegal fishing worried that using these tools could expose sensitive information about the people they work with, or that the information would be analyzed in unseen and biased ways by the algorithms.
What we know about the environmental impacts of hyperscale computing is alarming, but cloud vendors are throttling access to this information. Computing providers are not transparently reporting key metrics for assessing the environmental and environmental justice impacts of their infrastructure. Like many industries before them, they are deploying significant lobbying power to prevent regulators and advocates from requiring such reporting. We must not tout the benefits of these technologies if we cannot also meaningfully assess their negative impacts. When asking organizations for their thoughts about the current negative environmental impacts of these technologies relative to their potential future benefits, we’ve used the phrase “there’s no use saving a liter of water if it costs a gallon.” While that framing helps launch the conversation, we can’t say with any certainty what those offsets are without the operation-specific information that cloud vendors refuse to share. To inform the populace, promote competition, and protect the planet, government needs to demand or incentivize the sharing of these data.
ACCORDION - “Power Eats the Planet”
Not a Drop to Drink: Google (the smallest of the big-3 cloud providers) recently released their Sustainability Report for 2024. I’m going to pick on them, but - in fairness - I’m only able to because they actually share high-level water consumption information about their cloud computing data centers. Amazon and Microsoft do not. That said: according to Google’s numbers - which, if you track their reporting trends (see pages 110 & 111 of that document) against numbers released via lawsuit, are somewhat suspect - they consumed nearly 7.7 billion gallons of potable water for computing. A little back-of-the-napkin math equates that to the domestic water use of a 257,000-person city, which would make it the 91st largest in the United States, right around Glendale, AZ. Incidentally, Glendale is in a region undergoing long-term drought and significant acceleration in data center development. The math: 7.699 billion gallons in 2024 ÷ (82 gallons per person per day × 365 days per year) ≈ 257,234 people.
Localized Liabilities: More immediately concerning than hyperscale computing’s aggregate environmental impact are the localized effects. Researchers at The Maybe recently released an excellent report containing a set of case studies tracking localized impacts and responses across the world. Closer to home, the illegal methane turbines installed for the xAI supercomputer in South Memphis are a chilling example of the health and environmental rules big tech is willing to break. By its own reporting, Microsoft consumed 640 million gallons of water in water-stressed areas in FY24. Bloomberg has found that over 75% of distorted power readings occur within 50 miles of a data center, across both rural and urban grid environments, causing outages and appliance damage for other power consumers in the region.
Clarity, Capacity, and Curiosity
Bottom line? Certain technologies under the AI umbrella have a lot of potential - much of it unrealized at this point - to help environmental organizations pursue their missions or reduce the time spent on administrative and operational tasks. If you want to accelerate your pursuit of that potential, maintain a clear-eyed, values-driven, and curious approach. Invest in foundational resources and continuous learning. Don’t go it alone; seek out authentic partnerships rather than profit-driven corporate techno-solutionism that deepens power imbalances, leads to vendor lock-in, and treats environmental outcomes as a secondary goal.
Fortunately, there’s a lot more iceberg where that comes from. We’ve just completed coding and analyzing the full set of interviews and I’ll be reading research until they pry it out of my cramped, highlighter-stained fingers. When that work is finished, we will provide specific organizational and policy recommendations on the fastest, safest, and most sustainable avenues for enhancing environmental and conservation action.