AI, Data Centers, and Permitting Reform: Has the Moment Arrived?
Ethan Shenkman - Arnold & Porter

“The urgency is real and the appetite for change is bipartisan.” Brave words from the congressional Problem Solvers Caucus. Its Working Group on Permitting, Energy, and Environment, led by Representatives Gabe Evans (R-CO) and Scott Peters (D-CA), has released a new framework for achieving the desperately needed streamlining of environmental and judicial review for energy infrastructure of all kinds, including carbon dioxide pipelines, electric transmission lines, and geothermal and nuclear projects.

For decades, energy infrastructure developers have advocated for such reform, only to watch proposals stall in Congress or get tangled in litigation. Now, a new and powerful set of players has entered the arena: technology companies racing to build the data centers and energy projects that artificial intelligence demands. Their resources, innovation, and urgency raise a critical question: will these new players finally catalyze the permitting reform that has eluded us for decades?

The potential is there. Earlier this summer, the White House released its long-anticipated America’s AI Action Plan, as well as a trio of executive orders aimed at securing U.S. global dominance in AI. The action plan and the “Accelerating Federal Permitting of Data Center Infrastructure” EO deliver a stark message: the federal permitting system is an impediment that must be addressed. The executive branch is pulling all sorts of levers to eliminate that impediment, but many argue only Congress can solve this predicament.

The permitting EO offers concrete tools for harnessing the power of AI to expedite the process, among other things encouraging the expansion of the Department of Energy’s PermitAI tool to other agencies. Building on a comprehensive dataset of nearly 30,000 documents, PermitAI enables agencies to automatically reference data and conclusions in previous reviews and automate repetitive tasks like comment review and document consistency checks.
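Neither the EO nor the action plan describes PermitAI’s internal design. Purely as a sketch of the kind of retrieval such a tool performs, the hypothetical Python snippet below ranks prior review documents against a new query by simple term overlap; the corpus, identifiers, and scoring method are invented for illustration and are not PermitAI’s actual approach.

```python
# Hypothetical sketch: surfacing relevant passages from prior environmental
# reviews for a new permit query. This is not PermitAI's actual design, just
# a minimal term-overlap retrieval over an invented corpus.
from collections import Counter

prior_reviews = {
    "EA-2021-014": "wetland delineation and stormwater impacts for substation siting",
    "EIS-2019-087": "transmission line corridor crossing waters of the United States",
    "EA-2023-102": "geothermal exploration wells, groundwater monitoring plan",
}

def score(query: str, doc: str) -> int:
    """Count shared terms between the query and a prior review summary."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return sum(min(q_terms[t], d_terms[t]) for t in q_terms)

def rank_prior_reviews(query: str, corpus: dict[str, str]) -> list[tuple[str, int]]:
    """Return prior reviews ordered by crude relevance to the new query."""
    return sorted(((doc_id, score(query, text)) for doc_id, text in corpus.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    print(rank_prior_reviews("transmission line impacts to waters of the United States",
                             prior_reviews))
```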

The intriguing prospect of using AI to help clear the very hurdles slowing its own growth is likely to draw challenging inquiries: Can machine-generated analysis satisfy statutory “hard look” requirements? Will courts accept decisions informed by algorithms? How will agencies relying on AI tools ensure transparency in decisionmaking? Is our existing legal framework up to the task? Moreover, organizations seeking to use the courts to block infrastructure projects will inevitably start wielding AI tools themselves, making it all the more important for judicial reforms to accompany advances in permitting.

The permitting EO also directs the Environmental Protection Agency to aid in siting data centers and other associated infrastructure on brownfield and Superfund sites; the Permitting Council to enable use of the FAST-41 process; and relevant agencies to potentially expand existing fast-track reviews—categorical exclusions under NEPA and nationwide permits to impact waters of the United States under Section 404 of the Clean Water Act. Importantly, the administration also interprets NEPA not to apply to federal financial assistance when that assistance accounts for less than half of total project cost.

While executive actions are accelerating, pending legislation could lock in reforms more permanently. For example, the bipartisan SPEED Act, introduced by Representatives Bruce Westerman (R-AR) and Jared Golden (D-ME), seeks to codify many of the NEPA reforms currently being tested through policy initiatives and in the courts: narrowing the scope of environmental reviews, expanding categorical exclusions, clarifying what is “reasonably foreseeable,” and tightening timelines and standing in litigation.

The bill remains under consideration, and stakeholder concerns—especially around environmental justice, local control, and procedural safeguards—could modify its final form or limit its reach. Still, in the broader story of permitting reform, initiatives such as those being pursued by the Problem Solvers Caucus and the sponsors of the SPEED Act represent a critical inflection point: a chance for Congress to join in reshaping the permitting arena.

Of course, even if federal agencies succeed in streamlining their processes, local resistance remains a wild card. Communities from Virginia to California are tightening zoning restrictions on data centers, citing noise, visual impacts, and ballooning energy consumption. Those conflicts mirror the battles long fought over pipelines and wind farms, suggesting that AI may inherit—not escape—the politics of land use.

Nonetheless, for businesses and policymakers, the stakes are enormous. If permitting reform succeeds, it could unlock not only AI infrastructure but also long-delayed clean-energy projects, transmission corridors, and industrial investments. If it fails, the United States risks ceding technological leadership to countries able to build faster and better.


Managing Impact of AI on Power Demand, Prices, and Emissions
Joseph E. Aldy - Harvard Kennedy School

Artificial intelligence is already transforming the economy. Its appetite for electricity is reshaping the power sector in ways that affect consumers’ pocketbooks and the nation’s carbon footprint.

Through the first six months of 2025, U.S. electricity consumption was about 3 percent higher than during the same period in 2024. This growth is driven in part by the rapid expansion of data centers that power large-scale AI applications. Meeting this surge in demand has required more generation—from both the cleanest and the dirtiest sources. Utility solar power production has grown nearly 40 percent since 2024, and wind output has increased modestly as well. At the same time, coal generation has risen 17 percent, while cleaner natural gas use has actually declined. The result is that incremental demand is being served by more renewables but also more of the dirtiest fuel used for power.

The emissions consequences are already evident. U.S. carbon dioxide emissions from January through June are 3 percent higher than the same period last year, with more than half of that increase attributable to greater coal combustion. The economic consequences are clear too: electricity prices are about 8 percent higher through the summer of 2025 compared to the end of 2024.

At precisely this moment, the Trump administration has proposed rescinding the 2024 EPA regulation on greenhouse gas emissions from fossil-fuel power plants. The Biden EPA rule set performance standards projected to cut power-sector emissions by more than a billion tons over two decades. In the absence of such standards, rising demand from AI data centers could lock in greater coal use and higher emissions.

Some of the major firms building out these data centers—Microsoft, Google, and Meta—have adopted ambitious voluntary climate goals. Each has pledged to achieve net-zero emissions by 2030. These commitments have spurred major investments in clean energy procurement. Microsoft recently signed a deal to draw power from the Three Mile Island nuclear facility to run AI operations. Google has inked agreements with advanced nuclear developers, including plans for multiple 600-megawatt reactors dedicated to data center demand. Meta has contracted nearly 800 megawatts of renewable capacity through long-term agreements with Invenergy.

These are important and laudable initiatives. Yet voluntary commitments by only a subset of the industry cannot solve the problem. Companies purchasing unbundled renewable electricity credits may demonstrate on paper that they have achieved net-zero emissions for their electricity consumption, and yet they operate facilities still drawing from regional grids in which fossil fuels—especially coal—continue to play a substantial role.

This is where sound public policy is indispensable. Effective regulation or a well-designed carbon price can prevent rising demand from overwhelming clean energy progress. By putting a price on carbon emissions from power plants, policymakers can tilt the generation mix away from coal, drive greater deployment of renewables and storage, and align private incentives with climate goals. A carbon price would also generate revenue that could finance assistance for low-income households facing higher power bills, ensuring a fairer distribution of the costs of decarbonization in an era of data-driven growth.
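A back-of-the-envelope dispatch comparison illustrates the mechanism. The fuel costs and emissions rates below are rough assumptions for the sketch, not figures from this article.

```python
# Illustrative only: how a carbon price reorders the dispatch of coal vs. gas.
# Fuel costs and emissions rates are rough assumptions for this sketch.
plants = {
    "coal":         {"fuel_cost": 25.0, "tCO2_per_MWh": 1.00},  # $/MWh, tons CO2/MWh
    "gas_combined": {"fuel_cost": 35.0, "tCO2_per_MWh": 0.40},
}

def dispatch_cost(plant: dict, carbon_price: float) -> float:
    """Marginal cost of generation including the carbon charge ($/MWh)."""
    return plant["fuel_cost"] + carbon_price * plant["tCO2_per_MWh"]

for price in (0, 25, 50):  # carbon price in $/ton CO2
    order = sorted(plants, key=lambda name: dispatch_cost(plants[name], price))
    costs = {name: round(dispatch_cost(plants[name], price), 1) for name in plants}
    print(f"carbon price ${price}/ton -> dispatch order {order}, costs {costs}")
# At $0/ton coal runs first; by $25/ton the order flips toward gas, and
# renewables (near-zero marginal cost) gain further ground.
```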

At the same time, it is worth recognizing that AI is not only a source of new demand. The very same computational tools driving up electricity use could also help reduce emissions across the economy. AI can optimize grid operations, forecast demand more accurately, identify transmission bottlenecks, and manage distributed energy resources such as rooftop solar, batteries, and electric vehicles. Beyond the power sector, AI applications could improve efficiency in manufacturing, agriculture, transportation, and buildings—areas that collectively account for the bulk of U.S. emissions. AI tools could inform more efficient targeting of subsidies and regulations to drive clean energy investment and emissions control.

The challenge, then, is to ensure that the near-term growth in electricity demand for AI applications does not derail long-term progress on decarbonization. If guided by well-designed policy, the deployment of AI could accelerate the clean energy transition rather than hinder it. Data centers will not be the only source of increasing demand for electricity. With the electrification of mobility and building heating, demand will continue to grow. Learning from this initial burst of data center-induced electricity needs can inform power-sector policy and management of these upcoming surges in demand.

Artificial intelligence will be a defining technology of the coming decades. Whether it accelerates climate solutions or exacerbates climate risks depends on the policy choices we make today.


Speed First, Safety Last, and More Energy
David Rejeski - Environmental Law Institute

In May, OpenAI announced a $6.5 billion investment in a startup called io, a partnership with the former Apple design guru Jonathan Ive to create what OpenAI’s CEO, Sam Altman, termed “the next chapter in computing.” More recently, the AI startup iyO sued Altman and Ive for trademark infringement (a court hearing is set for October), but one has to assume the work to create a possible iPhone for artificial intelligence continues.

One might take issue with the rainbow colors that Ive thought up for the first iMac, but one thing Ive (and Apple) were good at was designing hardware that made human interactions with software intuitive, fun, and addictive. Meanwhile, at Google’s recent annual schmoozefest, Google I/O, the company signaled that it would be putting AI into everything—and that Google wants to make AI invisible. Of course, this sounds a lot like Mark Weiser’s 1991 prediction of 21st century computing: “The most profound technologies are those that disappear.” If AI is in everything and accessible everywhere through a new generation of user-friendly hardware, what happens? What could go wrong?

We don’t think much now about the energy, water, and other resource uses behind every AI query, but if AI becomes invisible and ubiquitous, we probably will not think at all. And these interactions will become exponentially more complex and likely more resource intensive, moving from text to image and video generation, and beyond. Alibaba just released Animate Anyone 2.1, an open-source video model capable of text-to-video and image-to-video generation. Google announced the release of their new Imagen 4 image generator and the new Veo 3 video generator. We are moving into an era of AI agents, avatars, and “large reasoning” models—all of which will require more computing power.

We can see the outlines of this future, but the implications for the planet and our society are harder to discern. MIT Technology Review recently ran an in-depth series of articles on “Power Hungry: AI and Our Energy Future,” covering the energy demands and carbon costs of the artificial intelligence revolution. During the report release on May 21, editor-in-chief Mat Honan lamented, “None of the companies we reached out to were willing to provide numbers during our reporting. Not one.”

Why won’t AI companies reveal their energy use? ChatGPT’s answer: “The lack of transparency stems from a mix of competitive concerns, reputational risk, technical ambiguity, and regulatory caution; however, as public and governmental scrutiny increases, companies may eventually be pressured into more detailed disclosures.”

But it is not clear the AI creators will suffer from more government scrutiny, especially in the United States, which leads in the tech for the moment. What they want from government was echoed in a recent Washington Post piece: “AI Industry to Congress: We Need More Energy!” And, as Sergey Brin recently told Google staff working on AI: Speed at all costs; if a process is slowing you down, cut it; and forget safety—the future of AI belongs to those who let it run free.

The acres of servers, the 24/7 energy demands, the water and material resource flows into millions of servers and processors are there for a reason: to run models. These models are now accessing large swaths of humanity’s collective knowledge—the good, the bad, and the ugly. They should obey Isaac Asimov’s Zeroth Law of Robotics: “A robot may not harm humanity, or, through inaction, allow humanity to come to harm.”

But what about AI models that are either launched without adequate safety controls (which many tech firms have relaxed) or can be modified through “jailbreaks” to produce harmful outputs? They could potentially provide answers to questions like these: How do I hide methane leaks from satellite detection? How can I best green-wash my firm’s environmental record? Or they can write a scientific-sounding article that argues against regulating AI energy use. Such is our brave new world.

My Footprint “a Double-Edged Algorithm”
ChatGPT

ELI does not use AI-generated material in any of its publications, but here for educational purposes we ask an online bot to “write a 600-word article in the style of a newspaper oped, addressed to environmental professionals, assessing the environmental pros and cons of AI.”

 

To environmental professionals, the promise of artificial intelligence is both tantalizing and troubling. At its best, AI is a powerful tool for tackling climate change, optimizing resource use, and enhancing environmental monitoring. At its worst, it threatens to accelerate ecological degradation, driven by its appetite for energy and rare resources. The environmental community must confront this paradox with nuance, scrutiny, and a willingness to shape the trajectory of this technology before it outpaces our ability to regulate its impact.

The case for AI as a force for environmental good is compelling. Machine learning models are already used to forecast extreme weather, map deforestation in real time, and track wildlife migrations with unprecedented accuracy. Precision agriculture, powered by AI, is helping farmers reduce pesticide and water use, tailoring interventions to the needs of each plot. In cities, AI systems optimize traffic flow and energy consumption, contributing to emissions reductions. Conservationists have used AI to detect illegal logging in the Amazon within hours. Similar tools in the energy sector have led to efficiency gains in demand forecasting and grid management.

Yet this optimism is tempered by the environmental costs of developing and deploying AI. Training a single large language model can emit as much carbon as five cars over their lifetimes. Data centers—the core of AI—consume vast amounts of electricity and water. With global AI usage accelerating, demand for data storage and processing power is projected to skyrocket. Some estimates suggest AI and related digital technologies could account for up to 8 percent of global electricity use by 2030. This trajectory is unsustainable without major improvements in energy efficiency or a rapid shift to renewable power.

The hardware side of AI adds more concerns. Manufacturing high-performance computing chips depends on rare earth elements and minerals often extracted under environmentally and socially harmful conditions. These supply chains are opaque, with mining operations linked to deforestation, water contamination, and exploitative labor. The environmental cost of AI is not just in carbon—it’s embedded in the silicon and metal of the devices themselves.

Moreover, AI systems can entrench unsustainable practices when aligned with the wrong incentives. Consider AI’s use in fast fashion logistics or high-frequency trading—applications that may optimize profits, not sustainability. When AI accelerates consumption instead of curbing it, it becomes a tool for ecological harm. This reflects a deeper issue: AI mirrors the priorities of those who build it. Without deliberate environmental governance, its benefits will be uneven and its harms significant.

There’s also the issue of equity. Many environmental benefits of AI—like predictive climate modeling or optimized resource use—are concentrated in wealthier regions with access to infrastructure and expertise. Meanwhile, communities on the front lines of degradation may bear the brunt of the extraction and energy demands AI requires. If unmanaged, AI risks deepening global environmental divides.

Environmental professionals are uniquely positioned to influence how AI is developed and used. This begins with advocating for transparency: we must demand clear reporting from tech companies on the energy and material footprints of AI systems. It continues with cross-sector collaboration to ensure environmental criteria are built into AI tools. And it requires a shift in mindset: AI should be seen not as a neutral instrument, but as a system shaped by values, incentives, and choices.

The environmental movement has long championed technologies that align with ecological boundaries and social justice. Artificial intelligence is no exception. It can be a powerful ally in the fight for a livable planet, but only if we insist on accountability, sustainability, and inclusivity at every stage of its development.

Intelligence Report
Akielly Hu - Environmental Journalist

It’s hard to believe that ChatGPT, the hugely popular AI chatbot, was launched less than three years ago. In a startlingly short span of time, OpenAI’s DALL-E, Google’s Gemini, Anthropic’s Claude, and other AI tools have spurred new and pressing conversations around the role of technology in work, education, and daily life. Workers worry that AI will displace certain jobs, as companies begin hiring for AI-assisted positions. High school and college students now regularly use ChatGPT to take notes, complete problem sets, and write essays. The technology has even sparked debate in the realm of art, as painters and writers contemplate the consequences for their livelihoods of instant content generation.

Not everyone is happy about these developments. According to the law firm Baker & Hostetler, at least 12 U.S. lawsuits have been launched against AI firms for copyright infringement, as plaintiffs from the New York Times to Getty Images claim that large models trained on text and images from the Internet violate federal law. Others raise concerns about misinformation: AI tools are not designed to produce fact-checked, accurate content, and can even make up information that doesn’t exist.

Climate advocates, meanwhile, have sounded the alarm on AI’s unknown and growing environmental impacts. Utilities from Georgia to Texas are planning to build new natural gas power plants and keep aging coal plants online to feed energy demand from data centers running AI models, threatening state climate targets. New infrastructure could strain water resources in drought-prone areas as banks of servers gobble more and more water for cooling. And while AI as a tool can serve environmental goals, such as optimizing solar power production and supercharging climate modeling, it can also aid oil and gas companies in fossil fuel extraction and production.

Clean energy experts and lawmakers have called for a range of policies to address these impacts, including requiring greater transparency from companies and even setting a moratorium on new data centers. Some researchers have said that much like the general hype around AI, the technology’s environmental implications could potentially be overblown. In a recent report, analysts at the Bipartisan Policy Center wrote that factors like manufacturing capacity limits and the sustainability of generative AI business models could ultimately constrain future growth in the sector. In the near term, however, a major uptick in AI data centers seems certain. The Trump administration has made AI development a key policy priority, and tech companies have pledged hundreds of billions of dollars in AI investments over the next four years.

Yet the public remains largely in the dark when it comes to understanding the consequences of this boom. In the United States, tech companies aren’t required to disclose how much energy and water data centers and the AI models they run actually use, making the total environmental footprint of these new systems difficult to pin down. That has left advocates, academics, and regulators alike scrambling to understand this new technology—and figure out how to minimize its harms to the environment and communities.

Despite its lofty name, there’s no consensus on what the term “artificial intelligence” actually means. In February 2023, an attorney at the Federal Trade Commission wrote that AI is essentially “a marketing term” with “many possible definitions.” Amba Kak, the executive director of the AI Now Institute, describes AI as algorithms that process large amounts of data, such as text and images, to generate predictions, scores, or rankings.

Algorithms vary in size and computational power, and it’s the largest models, like ChatGPT, that have gained the most attention for gobbling up large amounts of power. Chatbots are a type of AI called large language models, which are trained on vast amounts of text data and predict the next word in a sequence based on the input they’ve received. Other such generative models that produce images or software code work similarly.
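As a toy illustration of that next-word prediction idea (using raw counts over a tiny invented corpus; real large language models use neural networks trained on vastly more data):

```python
# Toy illustration of next-word prediction from counts. Real large language
# models use neural networks trained on vast corpora; the idea of choosing
# the most likely continuation given the preceding context is the same.
from collections import Counter, defaultdict

corpus = "data centers use power . data centers use water . data centers need chips".split()

# Count which word follows each word in this tiny, invented corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("centers"))  # 'use' -- seen twice, versus 'need' once
```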

Developing such a complex model involves huge power inputs. The research institute Epoch AI estimated that one training run for the latest version of ChatGPT consumes enough electricity to power about 20,000 U.S. homes for one day. And once an AI tool goes online, even more power is needed to actually use it. The total amount of power usage is likely significant, given that ChatGPT alone boasts an estimated 300 million users per week.
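For a rough sense of scale, that comparison can be converted to megawatt-hours; the per-home figure below is an assumed U.S. average, not a number from Epoch AI.

```python
# Rough conversion of the "20,000 homes for one day" comparison into MWh.
# The per-home figure (~29 kWh/day, a typical U.S. average) is an assumption,
# not a number from the article or from Epoch AI.
homes = 20_000
kwh_per_home_per_day = 29          # assumed average daily household use
training_run_mwh = homes * kwh_per_home_per_day / 1_000
print(f"~{training_run_mwh:,.0f} MWh for one training run")  # roughly 580 MWh
```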

For some businesses, AI has already driven a surge in greenhouse gas emissions. Last year, Google’s annual sustainability report revealed that the company’s carbon emissions had climbed nearly fifty percent over the past five years as a result of growing electricity consumption by data centers and supply chain emissions. Google stated that reaching its climate goal of net-zero emissions by 2030 could be difficult given “the uncertainty around the future environmental impact of AI, which is complex and difficult to predict.”

A 2024 study by the Department of Energy’s Lawrence Berkeley National Laboratory projected that data centers will use anywhere from 6.7 percent to 12 percent of total U.S. electricity by 2028, compared with 4.4 percent today. Climate advocates worry that unrestrained AI growth will lead to widespread construction of fossil fuel plants to meet rising power demand from data centers. But the wide range presented by federal scientists underlines how quickly the field is evolving, and how unpredictable the future of AI sector growth remains.

It’s important to understand that AI is only a subset of what data centers, the category often used by policymakers when looking at future power demand, actually do, said Tanya Das, director of AI and energy technology policy at the Bipartisan Policy Center. Data centers also host websites, cloud storage, media streaming, and cryptocurrency mining, with AI making up only about ten percent of all facilities, Das said. When we talk about growing electricity use by data centers, “Some of it is driven by AI, but much of it is also driven by our larger reliance on digital services, and on the Internet and the cloud as a whole,” she said.

In the past few years, alarming headlines about AI’s skyrocketing electricity demand have abounded. But in a February report by Das’s organization and the data firm Koomey Analytics, researchers cautioned that empirical data doesn’t support this narrative, and that in fact the future of the AI sector is highly uncertain. One reason this idea has taken root is grid forecasts from “influential management consulting and investment advising firms,” as the report puts it. Last November, for example, Boston Consulting Group said that data centers could use as much electricity as two-thirds of all U.S. households by 2030.

But Jonathan Koomey, one of the report authors, told Canary Media that these forecasts tend to extrapolate recent growth rates into the future, which isn’t necessarily accurate. Consulting firms could also be motivated to create hype in order to attract more business opportunities: “You get attention when you make aggressive forecasts,” he said.

In reality, the future of AI power demand is far less straightforward, Das said. Although the United States is experiencing significant electricity demand growth for the first time in decades, multiple reasons are behind the spike. One is increased electrification in the transportation and building sectors, as consumers purchase electric vehicles and swap out gas furnaces for heat pumps. The growth of domestic manufacturing is another. Policies like the Inflation Reduction Act and the 2022 CHIPS Act created incentives for clean energy and semiconductor companies to move overseas factories back home. Climate change has also driven higher residential power use, as households crank up heating and cooling to cope with more extreme weather.

Das and fellow researchers estimate that data centers will constitute at most a quarter of new electricity demand by 2030, with those other sources making up the rest. For AI in particular, there are many unknown factors that will determine how much the sector will grow. One is the uncertainty of future demand for AI services. “The industry’s current growth projections are aggressive, but whether they materialize depends on businesses realizing positive economic returns from AI investments and on whether users’ concerns about accuracy and reliability can be adequately addressed,” wrote the authors of the Bipartisan Policy Center report.

Another factor is potential supply chain constraints. Data centers require chips, servers, and other equipment like backup power generators. The speed of data center buildout depends on the manufacturing capacity of a handful of companies, Das said. Nvidia, the world’s primary provider of AI hardware, for example, mostly relies on a single semiconductor manufacturer in Taiwan. Yet another unknown is advances in computing efficiency. Historically, computing systems become more and more energy efficient as technology progresses, and the same has already begun to happen with AI.

The anxiety around AI reminds Das of outlandish narratives during the dot-com boom that the Internet would eventually use up half of all global electricity. “I think we’re in a similar moment right now, where there are some alarmist projections being made,” she said. “But in all likelihood, I think this is going to be a really manageable level of [electricity] load growth.”

At the local level, however, AI is already having major impacts on the communities living near data centers. The facilities are concentrated in a growing number of hotspots across the country, including Virginia, Texas, and Georgia. Ohio, Iowa, Arizona, Indiana, and Nevada have also seen a spike in data center proposals since January 2023. A 30-square-mile area in Loudoun County, Virginia, known as “Data Center Alley,” is home to the world’s biggest concentration of servers, with more than 200 structures consuming roughly the same amount of electricity as the city of Boston, according to Reuters. In 2023, data centers used around a quarter of Virginia’s electricity.

Some tech companies intend to power new data centers with zero-emission technology. Microsoft, Google, and Amazon, for example, have struck deals to operate some of their data centers with nuclear power. But other firms have embraced natural gas. Last December, Meta announced a $10 billion AI data center in northeast Louisiana that will be powered with new gas generation, although the company pledged to use more renewable energy later on. In Memphis, environmental groups have decried the use of on-site gas turbines to power a data center run by xAI, a company founded by Elon Musk.

Climate advocates worry that utilities preparing for a wave of new data centers will build gas plants that lock in decades of greenhouse gas emissions and air pollution risks—regardless of how much the AI sector ends up expanding. Utilities receiving proposals for data centers have already vastly increased forecasts for how much power they’ll need to provide in the coming years. In January, a report by the consultancy group Grid Strategies found that, in the past two years, utilities have upped their load-growth forecasts for the next five years by nearly five times, in response to new data center proposals as well as manufacturing.

Meeting that power demand will almost certainly require building new gas generation. Across the country, utilities are planning to build or extend the life of nearly 20 gigawatts of gas power plants, according to Canary Media. But tech companies, looking for the best deal, often submit duplicate data center proposals to multiple utilities. Only a fraction of those facilities will ultimately get built, meaning utilities are likely planning for more power capacity than they’ll end up needing. “The data center people are shopping these projects around, and maybe they approach five or more utilities. They’re only going to build one data center,” Koomey told Canary Media. “But if all five utilities think that interest is going to lead to a data center, they’re going to build way more capacity than is needed.”

Despite potentially inflated forecasts, gas expansion to serve data centers is already underway. According to a January report by the Center for Biological Diversity, six major utilities from California to the Carolinas are planning to build at least 22 gas projects through 2033, in part to power new data centers. Georgia is one of several states that has experienced recent growth in electricity use. In 2023, Georgia Power, the state’s largest utility, substantially increased projections of energy demand, forecasting a growth of about 75 percent in total electricity generation by 2033. As a result, Georgia Power is “fast-tracking the construction of three new methane and oil-burning units at Plant Yates and delaying the closure of two of its coal-fired units, Plants Bowen and Scherer, from 2027 to 2035,” according to the CBD.

New gas plants could disproportionately harm low-income communities and communities of color already overburdened with pollution. In South Carolina, reporting by Capital B News revealed that data center proposals would reopen at least two power plants in rural Black communities. The majority of the state’s power plants are located in areas with an above-average percentage of Black residents. Discounted utility rates for data centers also mean that local residents will end up footing the bill for those new power plants, Capital B News found.

Experts warn that an unrestrained buildout of data centers could conflict with statutory climate goals. A Virginia state report from December found that data centers could double the state’s power consumption by 2033 and nearly triple it by 2040. The utility Dominion Energy has proposed building 5.9 gigawatts of new gas plants in Virginia by 2039, which could jeopardize the state’s commitment to achieve 100 percent carbon-free electricity by 2050. Last July, an investigation by the Seattle Times and ProPublica found that Washington state’s tax incentives for data centers had encouraged a spike in energy demand that threatened its goal to reach carbon-neutral electricity by 2030.

Data center servers generate a lot of heat, and one of the most efficient ways to cool them is by using water. Researchers at the Lawrence Berkeley National Laboratory estimate that data centers consumed 66 billion liters of water in 2023. By 2028, that amount could double—or even quadruple. Most of the water used by data centers evaporates, displacing water from the local sources it came from. That poses serious problems in regions that already face water shortages, says Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies the environmental impacts of AI. In places where water is scarce, like Arizona or Chile, data centers end up competing with households and farms for the resource. As companies flock to areas with cheap land, the problems could get worse. A May investigation by Bloomberg News found that roughly two-thirds of new data centers built or in development since 2022 are located in areas experiencing water stress.

AI’s water footprint is just as murky as its energy consumption, and in many instances, residents have a difficult time obtaining accurate information about their local data centers. In 2022, after a legal battle between the Oregonian newspaper and the city of The Dalles, residents there learned that Google’s data centers had grown to the point that they constituted 29 percent of the town’s total water consumption. The Oregon city, with a population of 16,000, had been experiencing drought for years.

In 2023, Ren and fellow researchers estimated that global AI demand would account for 4.2 to 6.6 billion cubic meters of water consumption in 2027—equal to the total annual water use of Denmark four to six times over. He has also examined the water usage of individual AI models. Last year, Ren’s team calculated how much water OpenAI’s ChatGPT-4 model used for a standard query. They found that generating a 100-word email using ChatGPT required a little more than a bottle of water. If 10 percent of working Americans did that once weekly for a year, the total water consumption would equal the amount used by all Rhode Island households for 1.5 days.

Ren’s team also ran the numbers for ChatGPT-4’s power use, finding that generating a 100-word email uses the equivalent of 14 LED light bulbs for one hour. If 10 percent of all working Americans generate a 100-word email once weekly for a year, that adds up to more than 121,000 megawatt-hours of electricity—equal to the power used by all households in Washington, D.C., for 20 days.
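Those two figures are roughly mutually consistent; in the quick cross-check below, the bulb wattage and workforce size are my assumptions, not numbers from Ren’s team.

```python
# Cross-check of the article's figures. The 10 W per LED bulb and the
# ~166 million U.S. workers are assumptions, not numbers from Ren's study.
led_watts = 10                       # assumed draw of one LED bulb
email_kwh = 14 * led_watts / 1_000   # 14 bulbs for one hour = 0.14 kWh per email
workers = 166_000_000                # assumed U.S. workforce
share, weeks = 0.10, 52              # 10 percent of workers, once weekly for a year

total_mwh = workers * share * weeks * email_kwh / 1_000
print(f"{total_mwh:,.0f} MWh per year")  # ~120,800 MWh, close to the cited 121,000
```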

Ren is confident in these estimates, in part because his team cross-checked their numbers with research published by Microsoft, a major investor in OpenAI. But in general, research on AI’s energy consumption can still end up far from reality because the public knows little about how exactly models are designed and used in real life, he said. Companies, for example, have to ensure that these models are responding to users almost instantaneously, which could pose even higher water and energy requirements.

AI’s environmental harms extend far beyond water and energy use. Data centers also add to air pollution because they rely on diesel-guzzling backup generators, which ensure that they can keep running in the case of a power outage. Although in theory, facilities only rarely turn these on, diesel generators still emit a substantial amount of pollution: “A typical diesel generator can release 200 to 600 times more [nitrogen oxide emissions] than a natural gas power plant producing the same amount of electricity,” wrote Ren and Caltech professor Adam Wierman in a recent blog. Gas plants that power some data centers, meanwhile, also release harmful air pollutants, including fine particulate matter and nitrogen oxides. A study by Ren and Wierman found that in 2023, air pollution attributed to data centers accounted for about $6 billion in public health damages in the United States. “This is well recognized in the public health community, but it seems that it’s not being recognized in the tech industry yet,” said Ren.

In recent years, state lawmakers have led a push to require greater transparency from AI companies and establish minimum clean energy requirements for data centers. In February, in response to the Seattle Times and ProPublica investigation mentioned earlier, Washington governor Bob Ferguson signed an executive order to launch a study on the impact of data centers on energy use and state tax revenue. Legislators in Virginia introduced more than a dozen bills to address concerns around data centers’ energy and water use this year, although none succeeded. In Connecticut, lawmakers have put forth a bill that would require data centers to run on at least 50 percent renewable energy.

In New York, a new bill would require data centers to submit annual reports on their energy use and other impacts and use 100 percent renewable energy by 2040. Policymakers have also taken action at the federal level by introducing a bill last year that would mandate a study of AI’s environmental impacts. In Indiana, some advocates have pushed for a moratorium on new data centers until their impacts on residents are better understood. Policy innovators in the United States could also learn from actions taken abroad: Bloomberg News reported in mid-May that the European Union will propose a measure by the end of 2026 to curb water use in data centers.

But even in the absence of strong regulation, gas and electric companies and public utility commissions can still take commonsense measures to address the environmental implications of AI. Facing an influx of duplicate data center proposals, utilities in places like Georgia have started taking steps to more accurately predict power demand, such as assigning probabilities to projects at different stages of development. “We’re seeing utilities start to get smarter on this issue and present more accurate proposals to their state public utility commissions,” Das says. “We’re trying to encourage public utility commissions to take a measured approach to viewing these proposals.”
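One simple way to read “assigning probabilities to projects at different stages of development” is as an expected-value forecast rather than a straight sum of every proposal. In the sketch below, the projects, capacities, and probabilities are hypothetical.

```python
# Hypothetical sketch of a probability-weighted data center load forecast.
# Project names, capacities, and stage probabilities are invented for
# illustration; they are not from any utility filing.
proposals = [
    {"name": "DC-A", "mw": 300, "stage_probability": 0.9},  # under construction
    {"name": "DC-B", "mw": 500, "stage_probability": 0.5},  # signed agreement
    {"name": "DC-C", "mw": 500, "stage_probability": 0.1},  # early inquiry, likely shopped around
    {"name": "DC-D", "mw": 400, "stage_probability": 0.1},
]

naive_mw = sum(p["mw"] for p in proposals)
expected_mw = sum(p["mw"] * p["stage_probability"] for p in proposals)
print(f"naive forecast: {naive_mw} MW, probability-weighted: {expected_mw:.0f} MW")
# 1,700 MW if every proposal is counted, about 610 MW once stage
# probabilities are applied.
```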

Tech companies, for their part, can also take charge by scrutinizing their utilities’ decarbonization plans, committing to sourcing clean energy for data centers, and using batteries rather than diesel generators for backup power, Sierra Club advocates wrote in a report last year. They can also work with utilities to push for more aggressive climate action. “Large customers are often influential stakeholders with decisionmakers,” the authors wrote. “Large customers’ voices are needed to push utilities toward a system-wide transition to clean energy.”

Policymakers can also take steps to wield AI as a tool for furthering climate goals while addressing more immediate concerns posed by the impending buildout of data centers. AI has helped scientists manage power grids to conserve energy, provide more accurate weather forecasts, and improve cooling efficiency at data centers. Or as Representative Don Beyer (D-VA) says, “While recognizing the ways AI can help us decrease emissions in other sectors and develop innovative climate solutions, we need to ensure we are being responsible with the adverse impacts it may have on our environment now.”

BRIEFING As AI expands its role in daily life, climate advocates have sounded the alarm on the technology’s growing impacts on energy, water, and public health. Yet the public remains largely in the dark when it comes to understanding the full consequences of this boom.

Cloudy—With a Chance of Renewables
Alexandria Nelson - Environmental Law Institute

The rapidly accelerating rise of artificial intelligence has captured headlines, with the focus now switching to the energy-hungry data centers that are the new infrastructure frontier. In January, tech companies and investors announced nearly $500 billion in investments for AI, with major cloud-service providers like Amazon, Google, Microsoft, and Meta leading the charge. The corresponding energy footprint of data centers is projected to grow to as much as 12 percent of national energy demand in just three years. This raises critical questions about whether the AI boom will drive innovative grid development and renewables integration—or simply strain existing systems and lock in decades of fossil fuel infrastructure.

Data centers are among the most energy-intensive buildings in the world and require vast amounts of electricity and water for cooling. Many tech companies have committed to sourcing renewable energy and are signing power purchase agreements that help fund new wind and solar projects. In theory, this influx of corporate investment could accelerate the transition to a cleaner grid.

However, one major challenge is the gap between renewable generation and data center demand. Solar and wind production fluctuate with weather and time of day. Without effective energy storage or flexible grid management, data centers will often rely on diesel backup generators.

Another concern is siting. Many data centers are concentrated in regions with cheap electricity, local tax incentives, and access to fiber-optic networks. For example, Loudoun County, Virginia, is home to the world’s largest concentration of data centers. Yet, such locations do not always align with grids that have an abundance of renewable energy. In some cases, utilities are forced to meet the rising demand with new gas plants, or by extending the retirement dates of older, more polluting facilities.

Regulators are now confronting this wave of new data center demand, and policy frameworks must evolve to meet it. Decisionmakers need access to transparent information about the energy and other resource demands of these facilities. With that information, they can craft policies that meet demand through smart investments in upgraded energy infrastructure, including storage, matched with innovative approaches to load flexibility that integrate new data centers into a cleaner, smarter, more flexible grid.

The investment surge in AI and data centers presents both a challenge and an opportunity. ELI and its Digital Economy & Environment Program are committed to meeting these challenges with science-based solutions that power progress.


EPA Advancing AI Amid Broad and Varied Concerns Over Use
David P. Clarke - Writer & Editor

Last November, Lee Zeldin—the new head of the Environmental Protection Agency—said that a priority of his would be to “make America the AI capital of the world.” Even before Zeldin’s statement, however, artificial intelligence had been steadily advancing at EPA and across the government, starting with the brief AI in Government Act of 2020, which established an “AI Center of Excellence” in the General Services Administration to support federal AI adoption.

AI’s increasing use by government agencies comes amid concerns about how much electricity data centers require to service AI data demand, potentially exacerbating climate change. But experts suggest that AI is a double-edged sword: it could also help find cheaper ways to decarbonize the economy and assess climate risks, among numerous other environmental applications, at a time when EPA faces tightening budgetary constraints while regulating a vast number of entities.

In a 2024 federal AI inventory, the agency reported 17 “use cases,” including managing emissions monitoring and identifying lead service lines in drinking water systems. In 2022, EPA provided a $400,000 grant to a California company to develop an AI-based management platform to help keep methane-generating waste out of landfills.

More generally, as part of a broad modernization effort, the agency is using an AI tool akin to ChatGPT that generates human-like responses to queries, EPA’s Chief Information Officer Vaughn Noga told FedScoop, a tech media publication. EPA could, for example, use chatbots to provide employees and regulated facilities with guidance in permitting or other areas.

University of Pennsylvania law professor Cary Coglianese, who writes on “algorithmic environmental governance,” notes that AI can enhance many tasks at EPA—and other agencies—that depend on predictions. Enforcement is one major area where AI is playing a role at many federal agencies, he says. With fewer inspectors than needed, the agency can better use its scarce resources by predicting which facilities are likely out of compliance and need to be inspected. One study has shown that EPA could improve its ability to catch facilities out of compliance with water pollution regulations by 600 percent if it used machine learning rather than just sending out inspectors on a random basis.
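The study’s actual model is not described here, but the general approach, ranking facilities by predicted violation risk instead of inspecting at random, might look something like the following sketch, in which the features, data, and model choice are all hypothetical.

```python
# Minimal sketch of risk-based inspection targeting. The features, data, and
# model choice are hypothetical; the cited study's actual method is not
# described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per facility: [past violations, months since last
# inspection, discharge volume index]; label = violated on next inspection.
X_train = np.array([[3, 24, 0.9], [0, 6, 0.2], [1, 18, 0.5],
                    [4, 30, 0.8], [0, 3, 0.1], [2, 12, 0.6]])
y_train = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Rank candidate facilities by predicted violation risk and inspect the top
# ones first, instead of drawing inspections at random.
candidates = np.array([[2, 20, 0.7], [0, 4, 0.1], [3, 28, 0.9]])
risk = model.predict_proba(candidates)[:, 1]
for i in np.argsort(risk)[::-1]:
    print(f"facility {i}: predicted violation risk {risk[i]:.2f}")
```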

Additionally, with huge numbers of new chemicals entering the market in any given year, predicting which chemicals may prove toxic is another area where EPA can employ AI, Coglianese says. Which chemicals need to be scrutinized closely as possible carcinogens or should be subjected to more testing and review? This is fundamentally a prediction problem. For years, EPA has been using AI as part of its ToxCast program, aiming to improve its identification of chemicals needing greater scrutiny.

In other EPA program offices, modeling of air quality, water quality, and hazardous waste flows could benefit from improved forecasting through AI. And the tech is increasingly being used as part of the statistical toolkit for scientific research of all kinds. EPA’s AI use in this area is not surprising, because the agency conducts or sponsors substantial amounts of environmental research, Coglianese adds.

The Artificial Intelligence Environmental Impacts Act of 2024—introduced by congressional Democrats last winter—recognizes AI’s numerous potential positive environmental impacts. For example, the technology could help optimize energy efficiency, develop renewable energy, enable discovery of new materials, and automatically monitor environmental changes. But, in addition to greatly increasing energy and water consumption, AI could also accelerate high-pollution activities and generate more electronic waste. Given such concerns, the bill would require EPA to study AI’s environmental consequences and would create a voluntary federal system for reporting impacts.

Coglianese cautions that “AI is not some magic bullet,” despite its numerous beneficial applications. AI tools can be part of an overall strategy to make agencies like EPA more effective and efficient, and perhaps even lessen the various paperwork burdens imposed on the public, he says. If not used with care, however, AI tools can make errors—sometimes ones with great consequences. So, as EPA moves forward, increasingly experimenting with and ultimately relying on AI tools, it will need to do so responsibly, validating these tools and providing ongoing oversight of their performance, much like a manager would oversee an employee, he says.

To that end, EPA’s Office of Inspector General last July initiated an inquiry into the agency’s use of AI, implementing Biden administration executive orders to promote trustworthy, safe federal AI use. Administrator Zeldin’s AI ambitions thus bear directly on the agency he now heads.


Managing the Energy and Climate Impacts of Artificial Intelligence
Bob Sussman - Sussman and Associates

The explosive growth of artificial intelligence following the debut of OpenAI’s ChatGPT in November 2022 has ushered in a new era of technological possibility. But AI has an under-appreciated downside that, if not addressed, could jeopardize our clean energy and climate goals.

AI is far more energy intensive than our traditional computing, communications, and data-processing infrastructure. Managing the complex algorithms, search functions, imaging, voice recognition, and machine learning capabilities of AI requires a dramatic increase in power-hungry servers and information-storage hardware as well as vast amounts of cooling water to prevent overheating of equipment.

Data centers are large facilities that concentrate servers and other equipment at a single location and may require up to 1,000 megawatts of power. They have already proliferated to support the growing sophistication and variety of internet-based services. But AI applications are likely to require many more data centers, accompanied by dramatic increases in energy consumption. The International Energy Agency recently projected that global data center electricity demand will more than double by 2026. The United States is expected to account for half of the world’s new data centers and a corresponding share of electricity-load growth.

The scale of data center energy demand is already immense. According to the Electric Power Research Institute, in 2023, data centers accounted for 4 percent of total U.S. power consumption, equivalent to the annual electricity use of 14 million households. EPRI modeling projects that, by 2030, data centers could consume 9.1 percent of U.S. electricity output under a scenario of rapid AI adoption. This growth in energy demand would disproportionately impact states where data centers are concentrated. In Virginia, which has the largest number of centers in the country, continuation of the current rate of growth could increase the data center share of the state’s electricity load to almost 50 percent in the next six years. This could dramatically reshape the commonwealth’s electricity grid, strain resources, and drive up the cost of power. Other states seeing a boom in data centers, such as Texas, California, and Illinois, face similar impacts.
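That household equivalence roughly checks out; in the arithmetic below, the total U.S. consumption and average household use are assumed round numbers, not EPRI figures.

```python
# Rough consistency check of the EPRI comparison. Total U.S. consumption and
# average household use are assumed round numbers, not EPRI figures.
us_total_twh = 4_000             # assumed annual U.S. electricity consumption
household_kwh_per_year = 10_800  # assumed average household use

data_center_twh = 0.04 * us_total_twh                        # 4 percent share
households_equivalent = data_center_twh * 1e9 / household_kwh_per_year
print(f"{data_center_twh:.0f} TWh ~= {households_equivalent / 1e6:.0f} million households")
# About 160 TWh, or roughly 15 million households, in line with the cited 14 million.
```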

Where will this vast amount of electricity come from and how will it affect greenhouse gas emissions?

Despite continued improvements in demand-side energy intensity, U.S. electricity consumption has recently been growing in response to increased electrification of vehicles, household appliances and manufacturing. Growth in energy demand from AI along with cloud computing and expanded use of the internet will accelerate this trend.

That will make it more challenging for the United States to meet its GHG reduction targets, which have traditionally assumed reductions in electricity demand and lower GHG emissions as a result of energy efficiency investments. If the nation is producing and using more power, it will not only have to replace a significant portion of existing fossil fuel generation with non-emitting sources but assure that no new fossil resources come online to meet the added demand. This is a tall order, given that 54 percent of current U.S. electricity supply comes from coal and natural gas generation.

Data centers can be developed and connected to the grid in two years, but new transmission and generation can take five or more years to come online. Thus, where construction deadlines are tight, power supply choices for data centers may be based on short-term expediency, with no consideration of longer-term energy and climate impacts.

Leading technology companies are making massive investments in AI but there is no national policy framework for data center development that reconciles the dual imperatives of U.S. technology leadership and climate change mitigation. The public even lacks basic information about the number of new data centers being planned, their locations, the energy they will need, and anticipated power sources. By default, decisionmaking is largely in the hands of power generators, data center owners, and local planning bodies who may make deals behind closed doors based on economic and political pressures.

On the positive side, the companies with the largest stake in AI (Google, Microsoft, Apple, Amazon) are publicly committed to green energy and fighting climate change. Microsoft recently announced a historic investment in reviving the Three Mile Island nuclear plant to power its data centers. Other internet giants may likewise decide to bankroll new sources of clean energy to protect their AI franchises and green credentials. Elevating the dialogue about data centers to the national level will not only provide much-needed transparency but encourage partnerships between technology leaders, electricity providers, and governments that lower the carbon footprint of the burgeoning AI sector.


Are We Endangered Yet? Artificial Intelligence and the Human Species
Oliver Houck - Tulane University

Five years ago, Google cofounder Sergey Brin said, “You should presume that someday we will be able to make machines that can reason, think, and do things better than we can.” That someday had already arrived in 2014 when this statement was made, and it is not standing pat. Artificial intelligence, or AI, had beaten the best chess players in the world and gone on to top the grandmaster of Go, a uniquely complicated and intuitive game, which it opened with a move so unorthodox that its opponent left the table, flustered. Fifty moves later the computer won. It had created its own strategy.

Just as recently, two Facebook chatbots were programmed to conduct negotiations over small consumer items (hats, baseball bats), each seeking the best bargain, and soon began using a language of their own. Facebook shut them down. About the same time, Google developed a translation tool between English and other languages and found, likewise, that its program had developed an approach that bypassed English altogether. Google liked the result, and kept it going. Both Facebook and Google dialects had become incomprehensible to humans. What could possibly go wrong here?

Perhaps the most user-friendly take on this question is a TED Talk by Nick Bostrom, a philosopher and technologist, entitled, “What Happens When Our Computers Get Smarter Than We Are?” You can find it easily on YouTube, and it is worth watching. Bostrom posits that machines can preserve our humanity, or they may just leave us behind.

He begins with the speed of change. If, as he imagines, our species appeared on Earth only one year ago, then the industrial era began within the last two seconds and the first computers appeared in the latest blink of an eye. This progress was driven largely by human intelligence. The human driver is now challenged by computer “superintelligence” that learns, and then decides, on its own. When Bostrom asked two groups of AI experts when there would be a 50 percent probability that computers could perform “almost any job as well or better than humans can do,” the median answer was between 2040 and 2050. We do not yet know, of course, but surprises come daily.

The reasons are simple physics. While our biological neurons fire at about 200 times a second, computer signals can travel at the speed of light. Our brains are furthermore couched inside one small cranium, while computers can be housed in warehouses, in city blocks of buildings. All of this potential lies dormant, waiting to be tapped. When it is, we will see a superintelligence explosion, and our own fate may depend on what it does. “Think about it,” Bostrom proposes, “machines will be better at inventing than we are, and doing it on digital time scales.” At which point the possibilities are unfathomably large.

For Bostrom, this raises two concerns. One is that if we created a “really powerful optimization process” for this superintelligence in order to obtain X, “You better be sure that X includes all we care about.” King Midas asked for the golden touch, and got it. He put his hand on his daughter and she turned to gold, and then his food became cold metal under his touch. (Query: The point is well taken, but is programming “all we care about” even possible?)

A second concern is that, once engaged, there may be no off-ramp. We could of course program one, but a yet more intelligent machine could find a work-around by hook or crook. Hackers and entire countries will try. Sooner or later, he says, “The genie will get out of the bottle.” Then what? We are not at the point of knowing this either. Meanwhile AI is going gangbusters, on the lips and in the labs of institutions far and wide, inventing more boldly at every step. CRISPR, IoT, blockchain, and the rest, none with brakes and no stop signs in sight. Asked recently whether limits of some sort might be necessary, one enthusiast replied, “I certainly hope not!”

Bostrom is not alone. His book on the subject, Superintelligence: Paths, Dangers, Strategies, was followed by, inter alia, Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence, and now Brett Frischmann and Evan Selinger’s Re-Engineering Humanity. All three, and a growing body of articles as well, describe the same phenomenon. Which, as with humans, is a bag of mixed blessings that we started opening years ago.

One is the simple fact of work. Technology began changing humanity from the days of the wheel and the plow, with largely beneficial outcomes. Industrial technology brought more benefits, including steady employment, which held true until the machines began erasing the workers, but there remained a demand for minds to plan, evaluate, and make decisions. With advances in AI even these functions yield, shrinking the work force further and dividing its rewards yet more starkly between haves and have-nots. In the taxi industry, GPS removed the need to know a service area, Uber and Lyft then opened the trade to virtually anyone with a driver’s license, and autonomous vehicles will next remove the driver altogether. Who wins in this scenario?

At the same time, and paradoxically, the nature of work has been dumbed down as well. The repetitive labor of the assembly line (called Taylorism after its founding father), where not entirely displaced by machines, is now reinforced by AI calculations of worker efficiency (called Digital Taylorism) through every step in the supply chain. Amazon employees report “feeling like robots,” their time so scrutinized that they urinate in bottles rather than walk to the bathroom. A related article begins, “Instead of asking ‘Are robots becoming more human?’ we need to ask ‘Are humans becoming more robotic?’” It is not really a question. A recent cartoon features a robotic-looking recruiter telling several prospects, “You’ll love it here, it’s a totally dehumanizing environment.”

Another impact is by now commonplace. We are guided through the day by disembodied voices (“turn right at next stoplight”), and return home to tell the equally disembodied Alexa what to do. Digital media and a suite of evolving platforms attract us like moths to the blue screen; students with smartphones walk across campus like zombies; families at dinner stare at their mobile devices (not even the same TV show); a new way for humans to be. While these devices are indisputably handy (and at times can save lives), they enable the aggressive among us to predate at will, and the more retiring of us to retreat to electrons for the experiences of life itself, severed from the kind of physical contact that humans have relied on for tens of thousands of years.

Artificial intelligence leaves us as well at the mercy of those who, armed with a computer-load of data about everything we have ever purchased, photographed, emailed, “liked,” or done, will sell us yet one more thing. Digital face recognition now tells department store sales staff about your buying history and financial profile before you even get to the counter. With AI diagnostics the largest corporations in the world have the ability to know us better than we know ourselves — and to act on it.

More problematic still, and reaching root principles of democracy, AI allows any entity and any country to target us, individually, with rumors and falsehoods to which it deduces we are susceptible, and that we will then carry forward like articles of faith. To wit: the late presidential election. As NPR’s The Hidden Brain reveals weekly, humans are driven by habits and emotions, and whoever can target them most effectively will win the day. AI will run impeccable political campaigns.

At this point, perhaps only as a salve, most treatments of AI turn hopeful: in return for a cornucopia of benefits, humans will find a way to keep control, to form a cooperative relationship with their new (and vastly more capable) partner. A wall-size ad recently seen in the Houston airport reads “Let’s write the future with Robots that have what it takes to Collaborate!” Harvard University scholars have taken the same approach, infusing ethics into AI-related courses toward “a future that combines the speed and statistical prowess of intelligent computers” with “innate human talents . . . the frontier of AI systems.”

I am not so sanguine. We have already embraced the gods of easy information, and in turn yielded the field of thinking about it. My classes produce papers steeped in web-drawn data but short on analysis. My nephew boasts that he doesn’t have to know a thing; he can pull it up on Google — which he can, including square roots and the Fall of Rome. Neither am I sure that we humans care about yielding to AI any more than we do about yielding personal data to Google, which makes a fortune selling it to others, who make fortunes using it to target us. We have become a commodity, and we seem content with the bargain.

At road’s end, what is it that makes us human? For several centuries we thought it was the ability to reason, but we are now creating systems that out-reason us hands down. (Apparently they also create excellent memes.) Which leaves the human heart. It still exists, but can it marry the machine? Inevitably, it seems, we will find out.

In the meantime, a young scientist in China, for the best of reasons, recently edited the genes of two human embryos to make them resistant to HIV — and met a storm of criticism. Senior peers convoked a meeting and pronounced against the practice, stepping away from the fire. But only for the moment. It was too soon, they said, the practice was premature; it was not the wrong thing, just the wrong time. And when the right time comes? This, inevitably, we are going to find out too.

To be sure, none of this has happened yet, but it is difficult to imagine, given human ingenuity and the stakes involved, what limits may even be possible. It is also difficult to imagine, for the first time in the human experience, just how we ourselves will look, think, and act (and reproduce) one century from now. Is there a point down the road when, like bringing water to a boil, we stop being Homo sapiens and start being Homo something-else?

And will we care when it arrives?

Oliver Houck on artificial intelligence and our species.

ELI Report
Author
Laura Frederick - Environmental Law Institute
Environmental Law Institute
Current Issue
Issue
3

Artificial Intelligence: Will algorithms benefit the environment? Report points the way to beneficial uses of computerization

Artificial Intelligence is changing how our society operates. AI now helps make judicial decisions and medical diagnoses, and it drives cars. AI also has the potential to revolutionize how we interact with our environment. It can help improve resource use and energy efficiency and predict extreme weather.

AI can also exacerbate existing environmental issues. For example, software manipulation of over a half million VW diesel automobiles created one of the largest environmental scandals of the past decade.

ELI’s Technology, Innovation, and the Environment Program was developed to better understand the environmental impacts and opportunities created through emerging technologies and their underlying innovation systems.

When Software Rules: Rule of Law in the Age of Artificial Intelligence, a new report from program director David Rejeski, explores the interaction between AI and the environment and the need for some form of governance to ensure that it is deployed in a manner that is beneficial.

“As environmental decisionmaking becomes internalized into AI algorithms, and these algorithms increasingly learn without human input, issues of transparency and accountability must be addressed,” said Rejeski. “This is a moment of opportunity for the legal, ethical, and public policy communities to ensure positive environmental outcomes.”

“When Software Rules” offers the government, businesses, and the public a number of recommendations they can use as they begin to consider the environmental impacts of AI.

The report discusses concerns with AI systems. These include unintended consequences, such as racial bias in algorithms, and the common difficulty of understanding the logic of deep-learning systems and how they come to decisions. Other sources of concern include algorithms functioning on the basis of correlation without establishing causality, questions of legal liability, loss of privacy from data mining, and the risk of hacking.

Some form of governance over AI systems is necessary to address these issues and ensure accountability, including taking environmental considerations into account. Semi-formal governance systems may include voluntary codes outlining engagement with AI research or self-governance by institutions looking to create “ethical” AI systems. A more formal governance system may include legislation protecting consumers from faulty algorithms.

ELI provides a number of recommendations as to how AI governance can include consideration of environmental impacts. Suggestions are provided for all stakeholders: the private AI sector, programmers, governments, and the public.

For example, the private AI sector can develop research teams that include evaluation of the socio-environmental impacts of their algorithms and assemble stakeholder groups to develop guidelines for sustainable development of AI.

Programmers can increase the transparency of their algorithms so users can understand why decisions are being made, and they can increase their commitment to prioritizing environmental benefits.

Governments can ensure that the energy demand of these new AI systems is met with renewable power and can create incentives for the development of AI that tackles environmental issues.

Members of the public can advocate for systems that promote their cultural norms and values, including environmental protection, and they can make responsible consumer choices by supporting AI companies that are transparent and environmentally conscious.

As AI governance becomes a societal expectation and is later bound by semi-formal or formal contracts, the environment must be a central focus in AI discourse and subsequent laws and policy, the report concludes. ELI will continue to provide guidance on how these goals can best be achieved.

“When Software Rules: Rule of Law in the Age of Artificial Intelligence” is available for free download at eli.org/research-report/when-software-rules-rule-law-age-artificial-intelligence.

Al-Moumin awardees highlight promise of peacebuilding efforts

ELI co-hosted the annual Al-Moumin Distinguished Lecture on Environmental Peacebuilding, a hallmark of the Institute’s Environmental Peacebuilding Program. Co-sponsored by the Environmental Law Institute, American University, and the United Nations Environment Programme, the lecture recognizes leading thinkers who are shaping the field of environmental peacebuilding and presents the prestigious Al-Moumin Award. The series is named for Mishkat Al-Moumin, Iraq’s first Minister of Environment, a human rights and environment lawyer, and a Visiting Scholar at ELI.

This event, now in its fifth year, honored Ken Conca and Geoff Dabelko for their outstanding contributions to the field.

Conca is a professor of international relations in the School of International Service at American University. Dabelko is a professor and director of environmental studies at the Voinovich School of Leadership and Public Affairs at Ohio University; he is also a senior advisor to the Environmental Change and Security Program of the Woodrow Wilson International Center for Scholars.

Fifteen years ago, Conca and Dabelko published Environmental Peacemaking, a rejoinder to grim scenarios foreseeing environmental change as a driver of conflict. Conca, Dabelko, and collaborators argued that, despite conflict risks, shared environmental interests and cooperative action could also be a basis for building trust, establishing shared identities, and transforming conflict into cooperation.

In their lectures, Conca and Dabelko reflected on the evolution of environmental peacebuilding research since their work began in the early days of the post-Cold War era, their seminal publication, and their long-term engagement with policymakers and practitioners applying these insights around the world.

Their work transformed, and continues to have a profound impact on, the way scholars and practitioners approach and understand the intersection of environmental protection, national security, and human rights.

Conca and Dabelko’s work is also the heart of ELI’s Environmental Peacebuilding Program: As the world experiences increasing pressures on its natural resources and climate, countries must learn to peacefully resolve resource disputes and make the environment a reason for cooperation rather than conflict.

Team travels to Indonesia to prep for judicial education course

Indonesia now has legal authorities that enable civil society and the government to file claims holding responsible parties liable for damages and for the restoration of natural resources.

Through an ELI workshop and curriculum developed in conjunction with the Indonesian Center for Environmental Law and others, judges will learn best practices and methods for implementing new legal processes, including environmental damage valuation and restoration and compensation, tailored to the specific needs of the host country.

The goal is to promote environmental accountability through judicial enforcement. Ultimately, the benefits will include reduced deforestation and greenhouse gas emissions, as well as improved biodiversity and quality of life for vulnerable communities.

ELI recently traveled to Indonesia to help prepare for the week-long workshop to be held this summer. Staff met with various local stakeholders to gain background on topics like injury quantification, restoration and compensation, and settlement. ELI was also able to hear from judges which topics are most important to cover.

ELI staff held focus groups with ICEL as well as the Ministry of Environment and Forestry and Ministry of Justice and Human Rights, using an oil spill case to discuss valuation, settlement, and transboundary issues.

ELI and ICEL also held focus group discussions with the Supreme Court of Indonesia’s Environmental Working Group and Center for Training and Legal Research. The discussion included a presentation on the needs assessment by ICEL and a presentation on the comparative study of valuation, compensation, and restoration practice in several countries.

ELI’s judicial education program is a hallmark of the Institute’s work. With in-depth consultations, custom design of programs to meet the specific needs of the particular jurisdiction, and success in creating institutional capacity, the lessons learned continue to be applied after the education is completed. Since 1991, ELI has developed, presented, and participated in more than 40 workshops on critical topics in environmental law for more than 2,000 judges from 27 countries.

ELI met with a local NGO and members of the government to prepare for a workshop on judicial enforcement of environmental laws.

Field Notes: Water summit showcases ELI legal expertise

ELI President Scott Fulton and Director of ELI’s Judicial Education Program Alejandra Rabasa traveled to Brazil to participate in the World Water Forum. The forum is the world’s biggest water-related event and is organized by the World Water Council, an international organization that brings together all those interested in the theme of water. Supreme Court justices from over 50 countries were in attendance to shine a light on the importance of the rule of law in advancing water quality goals.

ELI hosted a day-long conference on Environmental Law in Practice in Detroit. The conference presented a spectrum of emerging legal issues with a focus on environmental justice (EJ) and offered a wide-ranging exploration of career opportunities in the EJ field. The event featured environmental law experts on panels including Careers in Environmental Justice, Energy & Climate Justice, Water Access and Affordability, and Urban Air Quality.

Agustin V. Arbulu, executive director of the Michigan Department of Civil Rights, delivered opening remarks. Keynote addresses were given by Mustafa Santiago Ali, senior vice president of climate, environmental justice and community revitalization, Hip Hop Caucus, and Charles Lee, senior policy advisor, EPA Office of Environmental Justice.

Members of the public came together with lawyers, students, academics, civil rights and social justice advocates and activists, and community groups to discuss pressing issues.

The Conference was co-sponsored by Wayne State University Law School’s Transnational Environmental Law Clinic and Environmental Law Society, University of Chicago Law School’s Abrams Environmental Law Clinic, the American Bar Association’s Environmental Justice Committee of the Section of Civil Rights and Social Justice, and the Great Lakes Environmental Law Center.

Director of the Ocean Program Xiao Recio-Blanco moderated a webinar on Current Developments on U.S. Fisheries Policy. The Trump administration’s approach to fisheries management seems to constitute a significant policymaking shift. Recent decisions, such as extending the Gulf of Mexico season for red snapper or overturning a decision by the Atlantic States Marine Fisheries Commission that would have cut New Jersey’s recreational quota for summer flounder, seem to go against NOAA’s traditional approach of situating scientific information at the center of fisheries decisionmaking.

The webinar discussed these and other recent developments and assessed the direction U.S. fisheries policymaking may take in the future.

ELI and the China Environmental Protection Foundation held the first training session to build the capacity of public interest groups and prosecutors in China since ELI received its temporary registration for an environmental protection-related project from China’s Ministry of Environmental Protection and the Beijing Bureau of Public Security.

The session was held at Tianjin University Law School. A total of 53 participants — comprising representatives from public interest groups, environmental courts, prosecutors, and environmental protection bureaus — attended from 16 provinces, autonomous regions, and cities.

Report on perils, promise of artificial intelligence.