Who Bleeds From Introducing Cutting-Edge Nuclear Reactors?
Craig M. Pease, Former Law School Professor

Nuclear micro reactors have been deployed on U.S. aircraft carriers and submarines since the 1950s. Recently, considerable progress has been made on updated micro reactors for commercial generation of electricity. As a proof of concept, the Defense Logistics Agency recently awarded a contract to the private corporation Oklo to build a micro reactor at Eielson Air Force Base in Alaska. Research continues apace, with about 75 ongoing U.S. projects to research, develop, and deploy next-generation reactors.

All the hullabaloo about advanced reactors notwithstanding, nuclear electricity in the United States is in steep decline. There are about 90 classic light water nuclear reactors operating in the United States today, producing some 20 percent of the electricity for the grid. They are immense facilities, with the familiar giant curved concrete cooling towers. Nuclear Regulatory Commission licenses for about 20 percent of this fleet will expire before 2030 if not renewed. About 40 reactors have already been decommissioned. By contrast, only a handful of new ones are being built. Most recently, this past summer, after over a decade of construction and at a cost north of $30 billion, a new unit at the Vogtle plant in Georgia was brought online.

The situation is even worse than a declining fleet suggests. Essentially no uranium is currently mined in the United States. The world’s largest supplier is Russia. And it is not just mines. The bankruptcy of Westinghouse, a manufacturer of reactors and their components, is an exemplar of the widespread degradation of the institutional network needed to support commercial reactors.

A nuclear renaissance will entail rebuilding nuclear logistics, manufacturing capabilities, and supply chains. That is not the realm of science or engineering; it requires a qualitatively different sort of expertise that is the province of government, the military, and business—all working together. A nuclear renaissance will require a rebirth of this institutional capability.

Though new nuclear reactor technologies are diverse, with inscrutable names (HALEU fuel and molten salt cooling), the unifying underlying idea is to dramatically reduce reactor size. Micro-reactor technology is not a single innovation, but rather a complex series of interlocking advances, all of which must work together. New reactors typically operate at higher temperatures, which in turn requires novel fuels. These include higher concentrations of fissionable uranium isotopes, and ways of altering the paths of subatomic particles released during fission. Also needed are different coolants and different materials to contain the reaction.

On a cost-per-kilowatt-hour basis, micro nuclear reactors are not currently competitive with light water reactors, according to a 2021 review in the scientific journal Progress in Nuclear Energy. That said, over many decades, reactor miniaturization should eventually reduce cost. For manufacturing generally, the more units produced, the lower the per unit cost. A classic example is the dramatic reduction in per unit automobile cost—and increase in comfort and safety—as manufacturing ramped up over the last century.
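As a rough illustration of that dynamic, consider a standard experience-curve (Wright's law) calculation. The sketch below is purely illustrative: the 85 percent learning rate and the $100 million first-unit cost are hypothetical assumptions, not figures from the review.

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.85):
    """Wright's-law estimate of the cost of the Nth unit: an 85 percent
    learning rate means each doubling of cumulative output cuts unit cost
    by 15 percent. All parameters here are hypothetical."""
    b = math.log(learning_rate, 2)  # progress exponent (negative)
    return first_unit_cost * cumulative_units ** b

# Hypothetical micro reactor program: first-of-a-kind unit at $100 million.
for n in (1, 10, 100, 1000):
    print(f"unit {n:>4}: ${unit_cost(n, 100e6) / 1e6:,.0f}M")
```

Under those assumed numbers, the thousandth unit costs roughly a fifth of the first, the same logic that drove automobile costs down as production scaled.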

Almost all novel nuclear reactor technologies will either fail in the research lab or fail commercially. When autos were first invented, there were hundreds of manufacturers. Eventually economic competition, and repeated innovation cycles, pruned that down to a handful of makers. On the cutting edge of technology, there is a lot of bleeding.

Because of faster innovation cycles, and the knowledge that comes with making more mistakes as more units are manufactured, I am confident we will see increased safety of any one nuclear reactor, considered in isolation. But perhaps paradoxically, the overall risk of a nuclear accident may well increase.
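The arithmetic behind that paradox is elementary probability. Here is a minimal sketch, assuming independent reactors and purely hypothetical accident rates; the reactor counts echo the figures above, but the risk numbers are illustrative only.

```python
def fleet_accident_probability(per_reactor_annual_prob, n_reactors):
    """Chance of at least one accident in a year, assuming each reactor's
    risk is independent of the others (a simplifying assumption)."""
    return 1 - (1 - per_reactor_annual_prob) ** n_reactors

# Today: roughly 90 large reactors, at an assumed 1-in-10,000 annual risk each.
print(fleet_accident_probability(1e-4, 90))        # about 0.9 percent
# A hypothetical future: reactors 10 times safer, but 10,000 of them deployed.
print(fleet_accident_probability(1e-5, 10_000))    # about 9.5 percent
```

Each unit is safer, yet the chance that something, somewhere, goes wrong in a given year is roughly ten times higher.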

Commercial success will entail scattering lots of small nuclear power plants across the landscape. That in turn will require uranium at more locations, and more people, institutions, mines, manufacturing facilities, supply chains, logistics, and knowhow. Even now, we have intractable systems-level problems; for example, there is still no long-term nuclear waste facility.

It is impossible to predict the systems level harms of micro nuclear reactors. In the early 1900s, nobody had the vaguest idea that autos would eventually conjure up land use changes that devastated traditional downtowns and lifestyles, air pollution that still causes millions of deaths annually worldwide—and carbon dioxide emissions that cause climate change. Micro nuclear reactors will inevitably bring social, economic, and environmental harms that we do not now comprehend.

Micro nuclear reactors may or may not replace oil and coal. But if they do, we will reap not only the benefits of reduced carbon emissions and air pollutants but also currently unknown harms and upheavals within human society.

Caution Without Repressing Innovation

Gary Marchant, Arizona State University

Quantum computing is often perceived as the next “next big thing.” But it is not unique—other emerging technologies likewise portrayed as the next big thing include 3D printing, brain-computer interfaces, blockchains, nuclear fusion, CRISPR gene editing, and artificial general intelligence. This impending wave of technology revolutions comes in the wake of society assimilating, or in the process of assimilating, other technologies such as genetically modified foods, synthetic biology, the internet and social media, smart phones, electric vehicles, and the internet of things.

While these relentless waves of technology innovation may overwhelm many citizens and our government resources, they do provide us with important lessons for technology governance. One of the most important is the critical role of timing—relevant for technology proponents, technology critics, the general public, and government.

For technology proponents, the tendency for hype is a key timing lesson. From the seemingly perpetual vision of flying cars to the promise of genetically customizing our drug prescriptions, over-optimistic promises undermine confidence in the latest technology and its developers. In fact, both the technology and regulatory frameworks for those technologies continue to progress, but at a much slower pace than predicted.

This problem is best summarized by futurist Roy Amara’s adage that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The lesson for quantum computing is that the benefits of the technology are likely to be more modest and delayed than the initial extravagant claims.

Technology critics can commit the opposite error, prematurely predicting worst-case consequences for new technologies that never manifest. For example, critics of the first genetically modified organisms predicted that the modified bacteria might irrevocably alter the chemistry of the upper atmosphere and cause a catastrophic ice age. Critics of nanotechnology claimed that nanomaterials were the “new asbestos” and called for a worldwide moratorium on all uses.

None of these fearsome predictions came true, but they did result in unnecessary technology delays or costs. Conversely, as the European Union documented in its report “Late Lessons From Early Warnings,” early indications of real risks are often ignored, to everyone’s ultimate detriment. The lesson for quantum technology is the need to carefully discern real from speculative risks, and to distinguish evidence-based from worst-case predictions for an emerging technology.

The public response to new technologies also evolves over time. While public engagement is critical for both pragmatic and philosophical reasons, it is human nature to resist change, so the public usually opposes new technologies at the outset. As Isaac Asimov noted, “All through history there had been resistance—and bitter, exaggerated, last-ditch resistance—to every significant technological change that had taken place on Earth.” In some cases the initial opposition dissipates with familiarity, as was the case with blood transfusions, in-vitro fertilization, microwave ovens, and hopefully electric vehicles. In other cases, such as nuclear energy and GM foods, the initial opposition persists or even grows over time, even if contrary to expert opinion. So, for quantum technologies, it will be critical to engage the public, but with the knowledge that initial resistance is expected, but may not reflect long-term opinion.

Finally, for government, the challenge is to be cautious without repressing innovation. Many technology harms should have been predictable and preventable with anticipatory governance. But not all risk can be anticipated ex ante, and we also need governance systems that are agile and foster resilience, to address real risks when they arise, rather than a parade of phantom risks hypothesized before a technology is deployed or even developed. Quantum computing technology provides an opportunity to learn and apply these lessons in timing.

We Need Capitalism to Save Our Planet
Shi-Ling Hsu, Florida State University College of Law

Capitalism is under fire. Whatever people think capitalism is, increasing numbers are turning against it. Environmentalists see greed and pollution everywhere, seemingly working in concert to impoverish the entire Earth. Lately, too, some on the political right seem to be having second thoughts, as they grow increasingly irritated with investment firms offering funds focused on environmental, social and governance considerations, or ESG factors. For detractors of all stripes, a growing concern is that there is, in some vague way, “too much capitalism.”

We do not have too much capitalism. In fact, what we need in this moment of environmental crisis is, in a sense, more capitalism. What is needed now is a new, muscular, and redirected capitalism oriented toward new industries and technologies that improve environmental conditions. The cornerstone of capitalist economies is market prices, and what is needed is a new set of prices that reflect all costs, including environmental harm. Those prices should be set through environmental taxes: a carbon tax, a cattle head tax, a nutrient water pollution tax, and other levies that reflect some social or environmental damage resulting from human activity.

Proposing environmental taxes is nothing new. The idea of a Pigouvian tax—one levied on a unit of pollution—has been around for almost a century, even if the practice of environmental taxation has endured a very slow uptake. But this article makes a different point. The need for environmental taxes is not so much, as previous writers have argued, to reach some optimal level of pollution, balancing marginal costs and marginal benefits. The point is that capitalism is a powerful, transformative, and disruptive force that is steered by market prices. Guiding capitalist economies in a new, sustainable direction requires changing market prices. Environmental taxes are the way to change market prices, and hence to change direction. Globally, there are few things more entrenched than fossil fuel industries; dethroning them will require something powerful and disruptive, more so than even governments. What else is out there, other than capitalism, that is up to the task?

The history of capitalism is one of disruption. One need only briefly consider what the internet did to travel agencies, what Amazon has done to retailing, and what social media has done to news media, to appreciate the power of market forces in a capitalist society. These changes are clearly not unalloyed causes for celebration. In fact, one might even argue that humankind might be better off without some of them. But they are illustrations of the disruptive power of capitalism. In all of these cases, a change in prices—sometimes a modest one—was enough to cause massive change. Investors, sensing opportunity from these price changes, quickly pumped money into ventures to exploit them. Backed by investors, these companies quickly and brutally moved to displace incumbent businesses.

This kind of disruption has already occurred in the energy sector in the United States, in a way that has actually produced some climate benefits: the advent of hydraulic fracturing. Commonly called fracking, it is the cracking of geologic formations to extract previously unreachable bubbles of oil and natural gas. The result was a significant decrease in their prices. For electricity generation, inexpensive natural gas rapidly replaced coal, a much more greenhouse-gas-intensive fossil fuel. According to the U.S. Energy Information Administration, more electricity was generated in 2019 by renewable energy sources than by coal, the first time that was true since 1885.

It is also worth noting that in the roughly 35 years before fracking, the Environmental Protection Agency had been steadily tightening regulation of air pollution from coal-fired power plants. By no means were those efforts wasted or undesirable. Cost-benefit analyses of the Clean Air Act show that the health benefits of regulation were orders of magnitude greater than the compliance costs. There is no telling how polluted the world would have been without the law. But the Clean Air Act was never going to be an effective way to phase out coal production and combustion in the United States. Because of the inherently political nature of regulation, it was never going to be as powerful a force as market prices.

Reducing coal consumption in the United States is only an intermediate step, however. Climate change is more serious than cautious, conservative scientists have predicted, and humankind cannot rely on natural gas for very long at all. Natural gas must serve as a very short bridge to a renewable energy future. Moreover, there are still roughly 50,000 jobs in the United States concerned with the extraction, distribution, and processing of coal. At the risk of sounding callous, these jobs must go. Society can no longer pay people to continue to engage in such a destructive industry. There are ameliorative steps that can and should be taken. A federal job retraining and relocation program, the Trade Adjustment Assistance Program, is in place to help workers affected by international trade, and it can be extended to coal workers. In fairly short order, this remedial help will have to extend to workers in the natural gas and petroleum industries as well.

It is wrenching to think about people out of work, and the collapse of communities and social networks. It is just difficult to transition lives, and to fully restore the well-being of displaced workers. But paying people to worsen climate change is an unaffordable folly.

How did we get to this point of environmental crisis? People blame capitalism for the profit motive that seems to have driven the fossil fuel industries to their excesses. But that is mistaken thinking. Capitalism is a system of economic governance, not political governance. A working definition is in order: capitalism is a system of private property whereby ownership of the means of production can be separated from the means of production itself. Capitalism is so powerful because it provides a way of linking money and ideas, sometimes far-flung money and speculative ideas. Capitalism is an extremely efficient way of allocating resources. It is a system that places market prices at its conceptual center, rather than straining against them, as socialist economies do. Prices are so ubiquitous and so remorseless in their assessment of scarcity that they provide instantaneous signals of value. Markets fail, of course, sometimes spectacularly, tragically, and globally. But capitalism only leans into what appears to be a default for human nature: the weighing of alternative uses of resources, and their conversion into a single metric, a price. How the social consequences of capitalism are managed is determined by policy choices. Capitalism is an exceptionally powerful engine, but it still needs to be steered by political choices.

What are the political decisions that have given rise to the current environmental crises? One choice that almost all countries still make, almost unconsciously and to varying degrees, is to favor short-term economic growth over long-term sustainability. No one would revel in saying it that way, but the preference is unmistakable. Climate change was first publicly identified as an environmental threat by President Lyndon Johnson, in a 1965 address to Congress. Some may blame the climate misinformation spread by oil companies and Republicans, but the blame for fossil fuel supremacy and the climate crisis should be widely shared. Democrats and their voters clearly hold the higher ground on environmental issues, but public opinion polls, and the everyday choices made by both red and blue voters, suggest that environmental concerns are superficial, still falling well below kitchen table economic concerns. President Biden, facing worrying political signs ahead of the 2022 midterm elections, released 180 million barrels of oil from the Strategic Petroleum Reserve because of concern over high gas prices. It is no exaggeration to say that American democracy was on the ballot in last year’s midterm elections, and no exaggeration that keeping gasoline prices low and keeping up consumption (and concomitantly, emissions) seemed still to be a significant national priority.

Having said that capitalism is not the cause of the manifold environmental crises facing humankind, it is important to acknowledge that it has, due to its enormous power, amplified the political choices steering it. Indeed, a critical weakness must be identified explicitly: while capitalism is not the root of the world’s ills, the capital it procures is, in fact, a substantial cause.

The reason for that is this: while capitalism is a system of economic governance that has as its goal the movement of resources to their most profitable use, the capital formed in capitalist ventures—the power plants, the pipelines, the refineries, the offshore oil rigs, and the vast fossil fuel-centered energy infrastructure—can work against this goal. Expensive brick-and-mortar assets, once paid for, create an enormous incentive to maintain the current mode of operations. The status quo may direct resources not to their highest and best uses, but to existing ones. Pricey assets create their own political economy. The result of investment in expensive capital is rigidity in production, the antithesis of capitalism. Joseph Schumpeter identified “creative destruction” as the defining feature of capitalism. The whole point of capitalism is to always have a competition for precious resources, so as to ensure optimal deployment. Capitalism is not the entrenchment of legacy industries.

The global fossil fuel-centered economy originated in the development of the steam engine, but widespread production owes a good deal to the first U.S. tax subsidies for oil, passed in 1913. The subsidies were small. But they were able to create a price differential, enough to induce petroleum exploration. And despite their smallness, year after year, these subsidies cumulated, as did the capital purchased to pursue these industries. The capital improved; innovation made oil capital more productive and efficient. Crude wells gave way to pumpjacks, and they have given way to the small fracking wells that are now commonly used to extract oil and gas in the United States. The oil industry is now famously or infamously powerful, with trillions and trillions of dollars of assets worldwide.

Capital cumulates. Small capital earns a profit, which is reinvested in bigger, more-efficient capital, until eventually the capital is so large, systemic, and ubiquitous that it is considered infrastructure. Environmentalists complain that legal systems protect corporations; those systems are actually protecting the capital.

This point might be more compelling if one considers another form of capital. Human capital is the education and the job training obtained by people throughout their working lives. It includes formal education, but it also includes the training provided by employers to operate the expensive capital assets acquired to produce fossil fuels. Human capital is much more valuable than physical capital, an order of magnitude greater in terms of its value toward production of goods and services.

Consider also that human capital is much more precious to individuals than even physical capital is to their corporate owners. In a lifetime, an individual only has a few chances to acquire human capital. To render that capital anachronistic is to impoverish. Offshore oil rigs can employ people with just a high school diploma, and can pay them $50,000 to $100,000 per year to work for six months as a roughneck. With a college degree, one can earn over $125,000 to be a subsea engineer, again for six months’ work. What are the options for these workers if they lose these lucrative jobs? For corporations, writing off physical capital is bad for business; for people, it is existentially threatening. Politicians compete over their commitment to protecting jobs for this reason.

The upshot of this bit of irony—that capitalism is not the problem but that capital investments are—is that capitalism can be a highly path-dependent phenomenon. I contend that fairly modest subsidies dating back to 1913, small but ongoing, have spawned a behemoth industry with enormous political and economic power. Its sheer size and its collection of physical and human capital are enough to send Republican politicians scampering to curry favor. Make no mistake: it is nothing short of a miracle that the modern oil industry can float a $500 million oil rig out on the ocean, drop a tube 10,000 feet to the seabed below, puncture it and extract oil in an extremely high-pressure and unpredictable environment. But those productivity miracles are also a curse: this productivity is also a source of inertia, crowding out investment in other forms of energy production and all efforts at conservation. The efficient and productive global oil industry is a product of capitalism, but has now become anathema to it.

What are we to do, then? How can capitalism be steered in a sustainable direction but also prevented from entrenching a new set of industries?

A new direction must be set by changing prices. Existing prices fail to account for environmental harm, and as a result, have directed entrepreneurial energy toward harmful, entrenched capital. A new set of prices must reflect the harm to the environment, but they should also minimize the potential for entrenchment. The straight subsidization of, say, wind energy may appear a century from now to be as foolish as the original oil subsidies of 1913. As much as is possible, a new set of prices should be narrowly tailored to address the environmental harm, so that the price is neutral as between technologies and methods of reducing that damage.

A new set of prices reflecting environmental impacts will no doubt produce new winners, but as long as prices closely hew to the environmental harms, they will minimize (and perhaps avoid) the picking of winners. A system of subsidies, such as those in the new Inflation Reduction Act, will indeed reduce emissions, and is certainly preferable to inaction. But a better role for government is to evaluate the harm from pollution and establish environmental tax rates representing that amount. Such action is more within the skill set of government agencies than choosing, via subsidy policy, the technologies to address environmental harms. While there remain areas in need of prescriptive government regulation (and perhaps even subsidies), the focus of governmental action should be on the harm side, leaving the technological choices for reductions to the private sector.

Environmental taxes are needed to introduce a new set of prices to steer the engine of capitalism in a more sustainable direction. The world must very soon adopt something like a global carbon tax. But there are other taxes that can and should be levied. A cattle tax is one. If we were to weigh all of the cows on Earth, the total would be greater than the combined weight of every other mammal—every human, dog, cat, whale, dolphin, horse, pig, bear, etc.—by a factor of fourteen to one. Given the greenhouse gas emissions from bovines, which belch methane as part of their digestive process, a head tax on cows would also be an important step forward.

As noted earlier, environmental taxation is not at all a new idea. But previous work has centered on a tax that internalizes previously externalized social costs. Since then, economists have been arguing that environmental taxes are the most efficient way of accounting for the environmental harms of polluting industrial activities. But I make a different argument. I am saying that there is another, little-appreciated justification for environmental taxes: the need to provide a new direction for economies. Capitalism is steered by prices. If we wish the entrepreneurial energies of an economy to change direction and begin to find new ways to protect the environment, we must supply the prices to incentivize new activities and innovations in new areas.

The traditional way of reducing pollution is to regulate the sources. That means, as compared with environmental taxation, issuing rules governing their operation. Some of those rules can seem tantalizingly like a price, since they involve compliance costs. Those rules typically even provide some flexibility as to compliance methods or technologies. Polluting firms may have options for reducing their emissions to mandated levels.

But regulations lack the one thing effected by environmental taxes: a marginal price on pollution. This characteristic of making compliance cost proportional to pollution amounts is what gives environmental taxation its secret power. By scaling compliance cost to pollution levels, environmental taxation activates a different part of the corporation, and a different part of the brain. Compliance can be a creative endeavor, but is not inherently so. By contrast, minimizing a tax bill, which is the task induced by environmental levies, presents many more possibilities for reducing pollution, and thus fosters the creative process. It pains me, as a lawyer, to say this, but in terms of creative solutions, I would rather have the business and engineering parts of a firm working on it than the legal team.
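To see the difference in miniature, consider a toy cost model. The numbers below are made up, chosen only to show how a per-ton levy keeps working at the margin where a fixed limit stops.

```python
def abatement_under_tax(tax_per_ton, marginal_cost_slope):
    """With abatement cost C(a) = 0.5 * slope * a**2, a cost-minimizing firm
    abates until the marginal cost of the next ton (slope * a) equals the tax."""
    return tax_per_ton / marginal_cost_slope

baseline_emissions = 100.0  # tons per year (hypothetical)
slope = 2.0                 # marginal abatement cost rises $2 per ton abated (hypothetical)

# Regulation: a fixed limit of 70 tons. The firm abates exactly 30 tons and has
# no financial reason to go further, however cheap the next ton might be.
abated_under_rule = baseline_emissions - 70.0

# Tax: $80 per ton still emitted. Every remaining ton costs money, so the firm
# keeps abating until the next ton of abatement would cost more than $80.
abated_under_tax = abatement_under_tax(80.0, slope)

print(abated_under_rule, abated_under_tax)  # 30.0 versus 40.0 tons abated
```

Under the rule, the incentive to abate stops at the permit limit; under the levy, every additional ton avoided shows up on the bottom line.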

This potential of environmental taxation for fostering creativity and generating innovation is cause for optimism. Innovation is a difficult thing to study empirically, as there is never a counterfactual baseline against which to measure induced innovation. But the evidence is highly suggestive. High prices induce the search for avoidance, and for alternatives. Two resource-poor countries serve as examples.

Sweden has no fossil fuel reserves. Its electricity is derived from nuclear power, hydropower, biomass burning, and some imported natural gas. Yet rather than rely on importing fossil fuels, Sweden has doubled down on its ability to innovate: it has a carbon tax of about $150 per ton. That high price has spawned a new generation of energy innovation. Sweden has created a robust energy loop that uses agricultural waste and woody biomass to generate energy. It has pioneered energy efficiency measures, including the deployment of district heating, the sharing of heat among buildings. It has gasoline cars, but cities are planned differently, and transportation is considered differently. In a country that is two-thirds as dense as the United States, people drive about half as much.

The other country that has responded to extreme resource scarcity with innovation is Israel. Israel receives enough natural rainfall to provide every person with 55 cubic meters of water per year. That doesn’t even account for the needs of agriculture or industry. Needless to say, water is extremely costly in Israel. Households typically pay orders of magnitude more than residents of the United States. And yet, Israel prospers. Compared to arid California, Israel is not quite as prosperous in terms of GDP per capita, but it is far superior in creating wealth from water: it generates 3.5 times as much GDP per unit of water. Israel, a technologically sophisticated country, uses artificial intelligence to spot leaks in sewer pipes—sewer pipes! Upon discovery of a potential leak, small robots are deployed (because people don’t generally like to crawl in sewage pipes) to plug the leak with an injection of industrial putty. Drip irrigation was invented in Israel, and despite its water scarcity, the country is a net exporter of many of the crops grown there.

In both Israel and Sweden, scarcity led to high prices, which led to innovation.

Taxes are, of course, politically toxic. And these environmental taxes will shift jobs from some industries to others, to the great consternation of those coming out on the losing end. But this is always the way things have been in healthy capitalist societies. Creative destruction has always been a feature, not a bug.

Humankind stands not so much at a crossroads as at a precipice. At some point, painful measures will have to be undertaken. The sooner those measures are taken, the less painful they will be, because entrenched industries continue to further entrench themselves. Meanwhile, the pain of not taking those measures grows in the form of increased climate damages and climate risks, and other kinds of environmental catastrophes. Presidential economic advisor Herb Stein once said, “If something can’t go on forever, it will stop.” And the libertarian icon Milton Friedman once said, “Only a crisis produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. . . .
I believe our basic function is to develop alternatives to existing policy and keep them alive and available until the politically impossible becomes the politically inevitable.”

It is odd to think that Milton Friedman once thought of himself as an underdog. Fifty years hence, it seems clear that his ideas have actually been carried too far. But it is surely cause for hope that Friedman preached a form of economic thinking that had no natural allies, and was able to launch a decades-long movement. Environmental taxation may yet go from being impossible to inevitable. TEF

OPENING ARGUMENT Environmental taxes are necessary to reorient market economies. Those levies will introduce a new set of prices to steer the engine of investment in more sustainable directions.

Sustainable by Design
Akielly Hu, Environmental Law Institute

Environmental professionals know more than most people why we can depend on our household taps to supply on-demand, clean drinking water. We can thank federal and state legislation, the work of water utilities, and local requirements. But after regulators and scientists set the standards, who is in charge of achieving them? The answer is engineers, who form a vital yet seldom heralded component in our system of environmental protection. Whether it is reducing air emissions of hazardous substances, mitigating harmful discharges into waterbodies, or reducing toxic impurities in drinking water, it is the engineer’s craftsmanship that achieves society’s public health and natural resource goals.

I’ve taken a peek into the work of engineers through an online course on these everyday infrastructure systems that, as big as they are, usually go unnoticed. One especially impressive example is the Catskill Aqueduct. This vast underground tunnel runs unseen between the Catskill Mountains and New York City for 92 miles, conveying 40 percent of the city’s drinking water supply. Built between 1907 and 1917, it uses nothing but the power of gravity to carry 600 million gallons of pure water per day to the country’s largest metropolis.

In its long voyage, the Catskill Aqueduct crosses under the Hudson River, plunging 1,140 feet below the surface of the stream at one end and siphoning back up on the other side—a six-mile journey in high-pressure concrete-lined tubes.

The engineer and teacher Stephen Ressler, Ph.D., calls this Hudson crossing “one of the world’s great civil engineering achievements.” Yet we rarely appreciate, or perhaps even know of, this impressive feat of human ingenuity and others like it throughout the United States.

Ressler provides a window into these infrastructure systems in his online lecture series Everyday Engineering. The class of 36 half-hour sessions is available through The Great Courses, run by Virginia-based The Teaching Company. Everyday Engineering provides an overview of “the products of modern engineering that have the most substantial influence on our lives,” says Ressler early on in the series. These are “the everyday technologies that surround us in our homes and workplaces, the infrastructure systems that have been so beautifully integrated into the fabric of modern civilization that they’re practically invisible and are inevitably taken for granted.”

Let’s stick with drinking water as an example. Setting up a system to provide instantaneous water for a city of millions requires a dizzying array of technologies. Engineers achieve this by “thinking systemically,” Ressler tells me during an online interview—it was a chance, via Zoom, to be able to talk back to the lecturer who appears on my laptop. According to the professor, we first need a water source. That usually requires building a dam to create a reservoir of readily available water. To ensure that the drinking water isn’t loaded with contaminants, the dam must be built “in an area often far removed from the urban zone to be served—beyond extensive development.” Once we have a reservoir, “engineers have to make sure the water in it doesn’t get stale or contaminated, and doesn’t have a lot of organic material growing in it.” Engineers must also ensure there’s extra storage capacity in case of flooding, and account for the risk of drought. Next, water needs to get to the city’s residents through a transmission system of covered channels, mains, and service lines. As the water needs to be clean before reaching consumers, it will also pass through a treatment plant, where engineers have deployed a variety of mechanical and biological processes to remove impurities. Finally, water is pumped into high-rise storage towers at the edge of the city, from which gravity will feed the lines reaching homes and businesses.

At every step of the drinking water delivery system—from collection, to treatment, to distribution—engineers must fulfill federal, state, and local environmental policy mandates. The Safe Drinking Water Act, for example, sets maximum contaminant levels in drinking water, which engineers are in charge of achieving. Engineers also need to site and design water sources to be free of contamination. And the law requires states and localities to maintain the integrity of their water distribution systems—a task handed to the engineers who create and upgrade these structures.

By the time water reaches your tap, it will have traveled perhaps hundreds of miles and undergone several rounds of disinfection and treatment. After it leaves your drain in the form of wastewater, engineering continues to carry out the mandates of laws like the Clean Water Act, treating the water through complex physical and biological processes to meet effluent limits before returning it to a local stream. In this way, environmental policy and engineering work hand-in-hand, each discipline informing the other in carrying out a blueprint for protecting public health and ecosystems.

Beyond water, everyday engineering provides us with the electrical power that fuels our lighting, appliances, hot water, and heating systems; the local roads, highways, and railway systems we use for transportation; and the solid waste collection and management systems that pick up our trash and recycling. So much of our daily lives, and our daily resource consumption, interacts with complex local and regional infrastructure networks. But many of us know little about these engineering systems, not to mention their role in environmental impacts. The connections are so overlooked, in fact, that even an engineer may not spot them at first glance.

The professor embarked on a journey of learning about sustainability while developing his Everyday Engineering course. A civil engineer by training, Ressler taught at West Point for 21 years and is now an emeritus professor there. He also served in the U.S. Army Corps of Engineers for 34 years, beginning as a combat engineer, and later becoming deputy commander of the New York District of the Corps.

Though he is quick to point out that he has no formal training in environmental engineering, Ressler’s evolution from “someone who was largely uninterested in sustainability to someone who is a true believer,” as he puts it, appears a natural progression for this lifelong learner.

Even before teaching for The Great Courses, he and his wife were “aficionados” of the online series: “We have a whole bookcase of The Great Courses CDs and DVDs.” During a sabbatical from teaching, Ressler emailed their customer service department on a whim to ask about teaching an engineering course. Since that first cold email, The Great Courses has featured a growing roster of Ressler-taught classes: In addition to Everyday Engineering, these include Understanding Greek and Roman Technology, Understanding the World’s Greatest Structures, Do-It-Yourself Engineering, and a fifth under production on catastrophic engineering failures.

Ressler is a lively instructor, with an emphatic way of speaking that’s immediately engaging. His popularity on The Great Courses site—there is a rating system—is in large part due to his trademark use of working models. These functional, miniature versions of engineering structures show how an arch bears weight, or how a dam creates a water reservoir, by demonstrating physical concepts in real time. He credits his military academy experience for this technique, as using physical models is “in the fabric of the West Point academic philosophy,” Ressler says.

While developing Everyday Engineering, Ressler began researching passive solar and other energy-efficient designs in residential housing. These principles were natural entry points into the field of sustainability; after all, “When you get a huge return for absolutely minimal investment, that’s very enticing to an engineer,” he says. But as he looked further, he began to consider how environmental issues fit into the broader purpose of engineering as a profession. “I began to think about the broader professional aspects of what it means to be an engineer, in a world where our decisions will have a tremendous impact on future generations.”

He reached a simple, yet powerful conclusion: “Sustainable engineering is just morally responsible engineering.” The revelation stems from his education in ethics as an engineer. Licensed engineers—mostly working in civil and environmental engineering—must subscribe to a code of ethics as part of their licensure process. “Protecting public health and safety is, and has always been, the paramount principle embedded in our code of ethics,” he says.

A licensed engineer himself, and an experienced evaluator of accreditation programs through his work with the American Society of Civil Engineers, Ressler began to probe the meaning of public health and safety when it comes to future generations. “Why should we place less value on their health and safety, than on that of people today?” he asks.

“Sustainability simply means my grandchildren should have access to the same resources, the same clean air, the same clean water, and the same viability of the planet into the future as we have, or perhaps better,” he says. “By the time I was done with the course, I found myself being a sustainability advocate.”

Ressler is not alone. A growing number in the engineering community have begun to recognize the importance of sustainability. One proponent is the American Society of Civil Engineers, an organization Ressler is extensively involved with. ASCE has set sustainability as a strategic goal, and recently made efforts to revamp its professional code of ethics for civil engineers. “This new code of ethics makes a very powerful case for sustainability and for the environment,” Ressler says.

In his Everyday Engineering lectures, Ressler identifies sustainability as “the most important trend in the world of engineering and technology today.” That’s because engineering is fundamental to achieving almost any environmental or sustainable development goal. For a product, building, or power source to be sustainable, environmental standards need to be “designed in from the outset,” he says.

In one video lecture, Ressler gestures toward a standard pop-up toaster on his classroom table. “Your new toaster won’t be maintainable unless the designer makes better accommodations for disassembly and repair—than this!” Stumped, he examines the unwieldy cooking apparatus. “How do I get this apart, anyway?”

For this reason, Ressler says designers are the ones principally responsible for sustainability. So how do we motivate them to embed energy efficiency and zero-waste principles? He argues the best way is for companies to recognize that taking green measures is good for business, and to voluntarily opt for these standards. For example, building a Leadership in Energy and Environmental Design (LEED)-certified headquarters building or other facility can help boost a corporation’s public image, and cut energy and water supply costs at the same time.

Government action and incentives are also powerful tools. “For instance, building sustainability requirements into the codes and standards that we use as the basis for design,” Ressler says. These include building codes, which are developed with technical input from professional societies like the American Society of Civil Engineers before being adopted by government entities. ASCE and other societies are currently updating codes to reflect higher standards of sustainability. The good news is, “Once you build sustainable design into the codes and standards, engineers will comply. There’s no question about it.”

Despite the many ways technology helps fulfill society’s environmental goals, engineering has also played a role in some of history’s greatest environmental catastrophes. Ressler spoke with us as he wrapped up post-production on his newest course, Epic Engineering Failures, which covers case studies like the Chernobyl nuclear meltdown and the Deepwater Horizon oil spill. These incidents offer fresh insight on how engineers can contribute to sustainability: “The most important answer is, ‘Don’t build nuclear power plants that explode and spew radiation across an entire continent,’” Ressler writes in a tongue-in-cheek email.

In reality, the reasons for these colossal failures are surprisingly complex. Ressler argues that one contributing factor for some disasters is a widespread policy governing the engineering profession, called the industrial exemption.

Like lawyers and doctors, engineers require a state-issued license to do certain types of work. But the industrial exemption policy means that engineers working in industry—a definition that varies from state to state—are not required to obtain a professional license. “Essentially, the corporation takes on both the responsibility and the liability that’s normally associated with the work that a licensed professional does,” Ressler explains. He cites a 2015 law review article by Paul Spinden that calls engineers “a striking enigma” compared to other licensed professionals. “An overwhelming majority of engineers—somewhere around eighty percent—do not pursue licensing as a professional engineer,” Spinden writes.

Licensing holds engineers accountable to a high standard of protecting public health and safety—values espoused in the code of ethics required for licensed engineers. As Ressler puts it, a licensed engineer “is operating under an ethical obligation to protect public safety, and can and will lose his or her license if some aspect of the design turns out to be flawed to the extent that somebody gets hurt.”

The industrial exemption warps this picture. “Industrial exemptions are granted to engineers in corporate bureaucracies—an organizational structure that causes them to be completely beholden to their managers, and incapable of exercising the sort of autonomy that an engineering professional should be able to exercise to protect public safety,” Ressler tells me.

He points to the Deepwater Horizon oil spill as one instance where the industrial exemption likely played a role—the disaster is one example in his Epic Engineering Failures series. The 2010 drilling rig explosion released four million barrels of oil into the water, led to the death of 11 crew members, and polluted the region’s unique ecosystems. To date, BP has paid more than $60 billion in penalties, natural resource damages, and other costs.

A series of flawed management decisions led to the tragedy. In a 2011 report to President Obama, the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling identified several issues in the cement job designed to seal the bottom of the Macondo well. BP managers and BP’s contractors also failed to adequately perform tests to check for leaks in the well. For one of these tests, the commission concluded, “It is now undisputed that the negative-pressure test at Macondo was conducted and interpreted improperly.” The report goes on to find that any final chances to catch issues during the temporary abandonment of the well were also missed. As Ressler says, “It was an amazingly multilayered failure.”

Deepwater drilling is inherently risky, the report makes clear. Yet despite these risks, the commission found that the largest offshore oil spill in U.S. history was “avoidable,” and “can be traced back to a single overarching failure—a failure of management.”

Indeed, “Better management by BP, Halliburton, and Transocean would almost certainly have prevented the blowout by improving the ability of individuals involved to identify the risks they faced, and to properly evaluate, communicate, and address them,” the report states. Addressing these risks would have required leaning on the specific know-how of engineers: “BP did not have adequate controls in place to ensure that key decisions in the months leading up to the blowout were safe or sound from an engineering perspective.”

The commission stated that the incident’s “root causes are systemic.” Ressler, Spinden, and others in the pro-licensure community argue that one of these systemic causes was the industrial exemption. When all liability and responsibility are in the hands of corporate managers, industry engineers lack the professional autonomy needed to make judgment calls to ensure safety. “That doesn’t mean failures will never occur when the work is being done by licensed engineers,” Ressler clarifies. “But it sure seems to me that these mega disasters are often the result of unlicensed engineers operating within corporate bureaucracies, where the engineers’ professional judgments about safety are constrained by conflicting corporate priorities.”

Industrial exemption is “a scandal that nobody wants to take on because the corporate interests are very strong,” Ressler tells me. From his perspective, it’s an issue raised “only in the fairly obscure literature of engineering professionalism in the licensure community.”

According to Spinden, “The industrial exemption is a natural outgrowth of a profession, which, from the outset, has been closely allied with the industrial firms it serves.” But engineering goals can sometimes be at odds with business motives. On this tension, the presidential commission concludes, “Whether purposeful or not, many of the decisions that BP, Halliburton, and Transocean made that increased the risk of the Macondo blowout clearly saved those companies significant time (and money).”

Ressler chooses stronger language to describe the issue. Flawed management decisions “were made in pursuit of corporate objectives,” he says. “They were all about making profits to the exclusion of following good engineering procedures and following sound safety procedures. It was a wanton disregard of all these safety systems that are designed precisely to prevent this kind of a problem.”

Spinden writes that other major engineering disasters have been influenced by the industrial exemption, including the 1986 Challenger tragedy and innumerable lawsuits for faulty consumer products. He argues that to prevent future incidents, “Nothing short of outright elimination of the exemption will be enough.”

Ressler agrees wholeheartedly. “Nothing about our current system has changed to cause me to believe that another Macondo blowout isn’t in our future.”

As we consume increasing amounts of fossil fuels, many of today’s most challenging natural resource and pollution issues will continue to be shaped by the inherent risks of the engineering required to drill in deeper and more remote areas. On the flip side, engineers will play a key role in a transition to renewable power, designing the energy sources and building a reliable electrical grid to reach our climate mitigation goals. Investments as a result of the bipartisan infrastructure law will only accelerate these efforts.

This ever-growing role of engineering in our lives means that understanding modern environmental policies is almost impossible without basic knowledge of these technologies. In Ressler’s Everyday Engineering course, he points out that “issues surrounding the environmental impact of technology can be quite complex.” For this reason, “unambiguously correct responses to these issues are rarely available.”

An illustrative example is the ongoing debate over methane recovery systems in landfills. Under the Clean Air Act, landfills are required to systematically collect and control landfill gas emissions for safety reasons and to reduce the climate impacts of methane, a product of the breakdown of organic matter. Landfills used to fulfill this requirement by burning off the gas with controlled flares. But nowadays, gas recovery systems—which either burn the gas to produce hot water or electricity, or process it to sell to a local gas utility—are becoming increasingly common. Some environmental groups have opposed these projects, arguing that the systems will incentivize landfill owners to maximize gas production. To do so, owners may delay the installation of a landfill cap, which seals the top of a dump site to minimize water infiltration. That’s because if more water seeps into a landfill, solid waste will decompose more quickly, and thus produce more methane.

Ressler uses this broad-stroke issue to highlight his most compelling argument for learning about everyday engineering: it can make us all better citizens. Back in his online classroom, he shows us his local newspaper, covered in headlines about new investments in the local highway and controversies about shale gas. “As an engaged citizen, I should take well-reasoned positions on issues like deregulation of the power industry, public investment in transportation infrastructure, and the environmental impact of shale gas exploration—and I should consider these issues when I vote for the officials who influence these policies. But can I really take a well-reasoned position on any of these issues without some understanding of the associated technologies?”

To find out, stay tuned. TEF

Akielly Hu is associate editor of The Environmental Forum. You can nominate people for Profiles by emailing her at hu@eli.org.

PROFILE Stephen Ressler’s online courses portray the behind-the-scenes work of the engineer in achieving society’s goals, including environmental protection. He tells us how the profession can lead the charge to a green economy, and why policies should play a role.

The Clean Water Bill
George Hawkins, Moonshot Missions

Contrary to the impassioned counsel of William D. Ruckelshaus, his EPA administrator, President Richard Nixon vetoed the Clean Water Act of 1972, stating in part “that we attack pollution in a way that does not ignore other very real threats to the quality of life, such as spiraling prices and increasingly onerous taxes. . . . Even if Congress defaults on its obligations to taxpayers—I shall not default on mine.”

Senator Edmund S. Muskie, Democrat of Maine, perhaps the single most important proponent of the bill, thundered back, “Can we afford clean water? Can we afford rivers and lakes and streams and oceans which continue to make life possible on this planet? Can we afford life itself?” Two hours later, the Senate voted 52-12 to override the veto, and the House followed with an astounding vote of 247-23. The CWA became the law of the land.

This monumental step taken 50 years ago, on October 18, 1972, highlighted fundamental issues that continue to govern debate on protecting the environment: how clean is clean, how much are we willing to spend, and how can we incorporate the best technology and practices to achieve our best answer to the first question, with the lowest cost answer to the second.

We argue here that while Senator Muskie was a visionary in 1972, President Nixon’s words have come back to haunt us 50 years later, even if for reasons he could not have imagined. We make this case recognizing that the parallels to 1972 are uncanny—for today, through the American Rescue Plan and the Infrastructure Act, the country is preparing its largest federal investment in water infrastructure since the construction grants program funded by the CWA in 1972.

The CWA regulatory structure and complementary federal funds delivered one of the great policy successes in history. Yet, mainly due to that success, the ARP and infrastructure investments today risk being ineffectively spent at best, and misspent at worst—which would be a tragic missed opportunity. Changing that outcome may require modifying the statute today with the same courage and foresight that was evident in 1972.

The hallmark of the CWA is its function as a technology-forcing statute. In 1972, the Senate, drawing on experience from the Rivers and Harbors Act of 1899, championed the radical approach of prohibiting discharges of pollutants, mainly from “point source” industrial and municipal wastewater treatment plants, unless governed by standards imposed by a permit. Standards are developed by analyzing what is achievable by the best available technology, without needing to assess the water quality of the receiving waters. The goal is to achieve waters that are fishable and swimmable by eliminating pollution as technology standards ratchet down over time. Hence the name: the National Pollutant Discharge Elimination System.

Also included in the CWA were improvements championed by the House to the existing system featured in the CWA’s predecessor laws dating back to 1948, which authorizes states to develop standards based on the quality of the receiving waters. Water quality standards were criticized as ineffective prior to 1972 and took a back seat to technology standards for years. Today, these standards are gaining a new life in the Total Maximum Daily Load program. Water quality standards, when they are successful, still aim mostly at the same target: point source dischargers, often adding additional limits to a NPDES permit.

Looking back, Senator Muskie was right: the CWA technology-based permitting system, along with the grants program that helped build a generation of treatment plants, has been a spectacular success. In 1972, my hometown waterway—the Cuyahoga River—was famously flammable, and in one species count, exhibited six fish (not species—six fish!). Today, the Cuyahoga River is home to more than 60 species of fish. The best bass fishing on the Potomac River—a river once called a stinking disgrace by Secretary of the Interior Walter Hickel (a remark often mistakenly attributed to President Johnson)—is just downstream from the District of Columbia Water and Sewer Authority’s (“DC Water”) huge Blue Plains Wastewater Treatment plant. Look down when you fly into Washington Reagan airport—you may be able to see water emanating from the plant creating a clean blue oasis surrounded by the often-brown water of the Potomac.

Herein, though, lies the lesson. Blue Plains treats about 300 million gallons of wastewater every day—enough to fill a professional football stadium. Most of this water comes from the Potomac, which, after its use by people in a 725-square-mile region surrounding metropolitan Washington, is transported to Blue Plains through 1,800 miles of pipes. Over time, as clean water standards have tightened, Blue Plains has removed an ever-larger percentage of pollutants before recycling the water back into the Potomac, which has therefore seen spectacular improvements.

The technology-forcing approach is relentless though, for the standards continue to ratchet down even as the costs rise, often exponentially. Take nutrients, the byproduct of the food and waste we flush down our toilets and sinks, as well as the animal waste, chemical fertilizers, leaves, and other organic refuse that wash off our streets, yards, and natural lands. If too many of them flow into our rivers, they function just like fertilizer. Algae and other plants thrive and die, a lifecycle that absorbs oxygen, and fish, oysters, and many other organisms literally choke from the lack of oxygen.

In 2000, Blue Plains adopted the first phase of technology required under the CWA to reduce the nitrogen concentration in its effluent from 14 milligrams per liter to 7.5 mg/L, which removed 7.3 million pounds of nitrogen per year at a cost of just more than $16 million. In the next phase, which ended in 2010, Blue Plains reduced nitrogen concentrations to 5 mg/L, a further reduction of 2.9 million pounds per year at a cost of about $130 million—about eight times the original price tag. In 2010 the permit ratcheted down again, mandating that the facility further reduce nitrogen to 4 mg/L, an additional reduction of 1.2 million pounds per year. This incremental reduction cost nearly $1 billion. The capital cost to remove one pound of nitrogen thus increased about 380 times. What was a budgetary hiccup became a major expenditure for Blue Plains and the ratepayers it serves.
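
The arithmetic is worth making explicit. A minimal sketch, using the rounded figures cited above (a back-of-the-envelope check, not DC Water’s actual accounting):

# Capital cost per pound of annual nitrogen removal at Blue Plains,
# computed from the rounded phase figures cited in the text.
phases = [
    ("Phase 1, to 7.5 mg/L", 16_000_000, 7_300_000),     # ($ capital cost, lbs N removed per year)
    ("Phase 2, to 5 mg/L", 130_000_000, 2_900_000),
    ("Phase 3, to 4 mg/L", 1_000_000_000, 1_200_000),
]
for name, cost, pounds in phases:
    print(f"{name}: about ${cost / pounds:,.0f} per pound of annual removal")
# Phase 1 works out to roughly $2 per pound and Phase 3 to roughly $830,
# an increase of about 380-fold.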

The rote application of technology-based standards triggers an even more staggering consequence in the systems that address an ancient challenge: how to manage sewage and stormwater collection in our cities. Hundreds of older urban communities were built with sewers that collect both sanitary flows and stormwater runoff in the same pipe, called combined sewer systems, or CSS. Other cities have separate sanitary sewer and stormwater systems; the stormwater side is known as a municipal separate storm sewer system, or MS4. Some cities have both.

Combined sewers fill with stormwater in larger storms and were designed to overflow during these events—sending untreated sewage and stormwater, conveying all the detritus of city streets, parking lots, lawns, and the rest, to local waterbodies. Separate sewers are also prone to overflows of sewage due to blockages and lack of maintenance. MS4 pipes convey stormwater directly to local waterbodies, carrying many of the contaminants just mentioned with virtually no treatment prior to discharge in every rain event. Sanitary sewer overflows are known as SSOs and combined sewer overflows as CSOs. Together with MS4 discharges, they threaten water quality, public health, and the environment in violation of the CWA.

EPA reported to Congress about this challenge in 2004—still one of the best summaries of the issue. The agency said that 746 communities had combined systems, with 9,348 CSO outfalls that discharged about 850 billion gallons of untreated sewage and stormwater per year. EPA estimated that between 25,000 and 75,000 SSOs occur each year, releasing an additional 3 to 10 billion gallons of untreated sewage—in this case not only to waterways, but to basements and city streets as well.

Remarkably, EPA’s first major CSO policy statement was in 1989, followed by guidance in 1994 and 1995, and then the first major implementation date for the so-called “nine minimum controls”—25 years after the passage of the act. There are many reasons for this enormous lag in performance, which in large part comes down to the difference between working “inside the fence” and “outside the fence” of treatment facilities.

First came improvements to treatment plants that involve adapting technologies and building new structures into existing facilities. Maintenance and upkeep follow—but within the controlled and visible environment “inside the fence.” Most publicly owned treatment plants are more alike than different, and uniform effluent standards are therefore easier to develop and apply.

Second came improvements to sewer and stormwater collection systems—which means working in the city, with the cost of the new equipment dwarfed by the reality of digging up streets. Maintenance and upkeep are more difficult, requiring crews that maintain a sprawling system “outside the fence” but hidden from the public and utility alike. The structure of combined and separate systems in each city is as diverse as the thousands of cities themselves.

The difference in venue is compounded by the answer to an ancient debate: should sanitary sewage be captured in pipes that are separate from the pipes that capture stormwater runoff, or should these flows be combined? From the beginning, there has been a fascinating interplay between technology and the solutions to wet-weather flow.

The common denominator, though, is that the expense of managing sewage and stormwater flows is immense—and involves not just the design of treatment and technologies, but the design and function of every city street, commercial building, residence, and public space. In this context, most consider combined sewers the overriding challenge, but a closer look suggests a more nuanced set of pros and cons.

Combined sewers in dry (non-storm) flow conditions, and at the first flush in wet weather, can intercept most of the flow and convey it for treatment—a clear positive. The first flush in any rainfall event includes the most contaminants and refuse, and channeling that flow for treatment is very beneficial. Yet in major rain events, combined systems overflow to urban waterbodies—a clear negative.

Separate sanitary systems intercept all sewage flow for treatment—a clear positive, although these systems are also subject to blockages and overflows. The MS4 system routes stormwater out of city streets—a clear positive. Yet the separate stormwater flow receives little or no treatment prior to discharge to urban waterways—a clear negative.

For either situation, the CWA is clear. The statute prohibits discharges of pollutants unless governed by a permit based on achievement of technology and/or water quality-based standards. In practice, though, applying standards to such complicated systems has yielded a fascinating combination of site-specific management requirements paired with preferred, if frequently rudimentary, technological requirements.

For combined sewers, as an example, EPA and states develop standards from an amalgam of modeling conjecture paired with an evaluation of common control technologies. Since the events that trigger a violation are related to the capacity of the CSO system to manage stormwater flows, the best available technology is founded on increasing that capacity with a variety of hard or gray infrastructure solutions. Most common is replacing combined systems with separate sewage and stormwater pipes (with all the pros and cons noted above), building storage basins of various designs, and digging deep storage and conveyance tunnels to capture overflow until the rainfall and excess flow recede. Importantly, many cities are now building green infrastructure that can capture and retain or delay stormwater before it reaches the combined sewer or MS4 system.

This approach made sense when it came to the forefront in the 1990s. Rainfall and flow modeling based on data from selected hydrologic years predicts how much excess flow is to be managed and where, and how many overflow events are likely to occur. The utility is required to implement a set of nine control mechanisms to gain a relatively speedy reduction in the number of overflows, followed by the negotiation of an aptly named Long Term Control Plan to reduce the number and volume of overflows, over the following 20 to 25 years, to a magic number predicted in the models.
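
In simplified terms, the modeling exercise runs a representative year of rainfall through an assumed system capacity and counts the exceedances, and that count becomes the permit target. A minimal sketch of the idea, with the flow series and capacity entirely hypothetical:

# Hypothetical illustration of how a model-predicted overflow count arises:
# estimate wet-weather flow for a design hydrologic year and count the days
# on which it exceeds what the system can carry.
def predicted_overflow_events(daily_flow_mgd, capacity_mgd):
    return sum(1 for flow in daily_flow_mgd if flow > capacity_mgd)

design_year = [120, 450, 90, 1030, 300, 880, 60, 1500, 210, 990]  # MGD; 365 values in practice
print(predicted_overflow_events(design_year, capacity_mgd=800))   # -> 4 predicted events per year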

The result is that although the language and process of an NPDES permit is used, the approach actually resembles the site-specific negotiation more typical of water quality standards, based on the conditions in the system. But technology-based rigidity takes the lead in the final plan, for the analysis often boils down to a target number of overflows projected to occur in any given year, which looks a lot like a common discharge limitation in an NPDES permit.

This approach makes sense until the cost of the solution is considered, in parallel with assessing whether reducing overflows to a specific number is the optimal use of public funds to improve water quality. Let’s return to Washington to understand this approach. All three of DC’s main waterways were polluted each year by more than 3 billion gallons of untreated sewage and rainwater overflowing from antiquated combined sewers. Moreover, peak wet-weather flows of up to 1 billion gallons per day arriving at Blue Plains during storm events did not benefit from full treatment.

In response, as outlined in its NPDES permit and two consent decrees, DC Water was first required to implement the nine minimum controls at a total cost of $40 million, which reduces projected overflows by 40 percent. It then had to expand the capacity of secondary treatment at Blue Plains for peak flows and extend the tunnel system to capture additional flow prior to treatment, at a total cost of more than $900 million, which reduces nutrient discharges and increases the CSO capture percentage. Last, it expanded the capacity of the collection system to store and convey peak flows by building giant underground tunnels, including additional treatment for excess flow, at a cost of more than $2.5 billion, with a projected additional reduction of overflows of 56 percent.

DC Water avoided the other common solution, except in limited areas, which is to reconstruct combined systems into separate storm and sanitary sewers. For DC, and most other cities, this option is cost prohibitive. After years of difficult negotiation, DC Water added a pilot to test whether green infrastructure, or GI, might become a more important part of the solution. If successful, it would eliminate the need for one of the planned tunnels. Each solution, including GI, is expensive to plan, design, and implement, and involves disruption to the community during construction.

Today, after more than ten years of construction, DC residents are seeing steady improvements to water quality. Yet they will never see the main solution, which is the most expensive public works project since the construction of the Metro system, because it is hidden underground and sits idle until a storm hits. (In comparison, a heralded school rebuilding program was targeted to spend $1 billion.) This solution has caused sewer rates to increase by a factor of three—from just about $40 a month to more than $120 a month, with further rate increases projected for the next decade at least.

Hundreds of CSO cities face challenges and costs of this magnitude: the Northeast Ohio Regional Sewer District serving the metropolitan Cleveland area is building a $3 billion solution; Kansas City faces a $2.5 billion bill, with St. Louis perhaps facing the highest cost at $4.7 billion. Medium and smaller cities face enormous costs as well, from an initial projected cost of more than $3 billion in Cincinnati and $1.4 billion in Buffalo to nearly $900 million for South Bend, Indiana.

Readers should step back and consider the size and scale of these numbers. The CWA is requiring expenditures that dwarf municipal budgets, present grievous affordability challenges for low-income ratepayers, and divert precious local funds from other priorities such as schools, housing, and the rest. We can imagine what President Nixon might have said about these costs. We can also imagine how Senator Muskie would have responded, voiced today by environmental and community activists: “Can we afford not to have clean water? Can we afford life itself?”

Muskie and Nixon were both right. The senator was right that an investment in clean water protects life itself and, in economic terms, can be a wonderful investment. Improvements at Blue Plains have helped the revitalization of the Chesapeake Bay. The Maryland Department of Natural Resources in 2004 placed the value of the Chesapeake Bay to Maryland and Virginia at more than $1 trillion, with an annual economic benefit of $33 to $60 billion. We support investing in clean water if we believe the money is being invested in the right projects and is accomplishing the right outcomes.

Yet Nixon was right too—society must be careful about the expenditures of such vast sums of public money. The CWA’s strength is its relentless drive to mandate technology-based standards to drive down pollutants. Its weakness is that there is no mechanism to place this process on hold once the costs overwhelm the benefits, or to shift focus to different sources and solutions.

Let’s turn to Cincinnati and South Bend to explore this concept. The Metropolitan Sewer District of Greater Cincinnati serves a population of 850,000, spread across 290 square miles. MSD manages both combined and separate sewer systems, which were estimated to overflow 11.5 billion gallons of combined sewage into the Ohio River and its tributaries every year. Applying the standard NPDES process to solve this problem—modeling of overflow volumes and events paired with the setting of standard solutions, mainly deep tunnels and additional storage—EPA estimated the cost to be $3.1 billion in 2002.

In my experience, groundbreaking innovation occurs when a team of the right people comes together to face what seems like an insurmountable challenge. This was the scenario for MSD: a $3.1 billion price tag was unimaginable for a region of many low-income neighborhoods and a struggling Rust Belt economy. MSD Director Tony Parrott, Deputy Director Biju George, Watershed Director Mary Lynn Loder, and Principal Engineer Reese Johnson, along with a few bright and creative consultants, started to develop an alternative solution.

The first flaw they discovered was the reliance on modeling to predict future overflows, which drove the selection and scale of the remedy. When comparing actual data to the model, the team was stunned by how often it was wrong, over- or under-predicting overflows. The solution was to recalibrate the model after each event and to add significant margins of safety just to be sure. The result, though, was to drive remedies to a larger and more expensive scale, with the parallel risk that these solutions would in fact be more than needed in some places, and less in others.

A simple question from Biju George triggered a revelation: “Why can’t the data be the model?” As sensor technology, computer power, and predictive models were advancing rapidly, the team began to consider how a model based on data from past years could be replaced by one that reflects events in real time. This is not really a model, but a “virtual twin” that integrates current rainfall and discharge data, along with data from digital sensors, to give operators an accurate picture of exactly what is happening in the system, which can then be paired with algorithms to improve real-time control decisions on how to respond.
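
In software terms, the shift is from a model calibrated to past hydrologic years to a live snapshot of the system assembled from sensor feeds. A minimal sketch of that idea, with all segment names, readings, and thresholds hypothetical:

# Hypothetical sketch of a "virtual twin": the current state of the collection
# system is rebuilt from live sensor readings rather than predicted from a
# model calibrated to historical rainfall years.
from dataclasses import dataclass

@dataclass
class SegmentState:
    segment_id: str
    rainfall_in_per_hr: float   # live rain-gauge reading
    flow_mgd: float             # live flow-meter reading
    capacity_mgd: float         # engineering estimate of pipe capacity

    @property
    def utilization(self) -> float:
        return self.flow_mgd / self.capacity_mgd

def refresh_twin(sensor_feed):
    """Rebuild the system snapshot from the latest readings."""
    return {r["segment_id"]: SegmentState(**r) for r in sensor_feed}

twin = refresh_twin([
    {"segment_id": "MC-01", "rainfall_in_per_hr": 0.4, "flow_mgd": 72.0, "capacity_mgd": 90.0},
    {"segment_id": "MC-02", "rainfall_in_per_hr": 0.9, "flow_mgd": 88.0, "capacity_mgd": 90.0},
])
# Operators and control algorithms query the twin directly:
at_risk = [s.segment_id for s in twin.values() if s.utilization > 0.9]   # -> ["MC-02"]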

The next revelation came after the virtual twin “turned on the lights” in the sewer system. Assets associated with MSD’s Mill Creek Interceptor, both upstream and downstream, had been managed separately as point sources or facilities, but never as a system. Building the virtual twin revealed that rain does not fall uniformly and that different parts of the system overflow at different times. Adding carefully located gates and storage to hold some flow in certain places freed up capacity for flow in others, optimizing the performance of the system and decreasing overflows.

The team also learned that what matters for water quality is when flow is captured for full treatment, not just how much. Capturing most of the first flush for treatment yields far more water quality benefit than capturing larger quantities of flow later in the storm. The real-time system, gates, and storage can be manipulated to ensure this outcome.
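
A rule-of-thumb controller built on that insight might look like the following, a hedged sketch in which the first-flush window and utilization thresholds are invented for illustration:

# Hypothetical control rule: prioritize treating early-storm ("first flush")
# flow, and open gates to storage only when the interceptor nears capacity.
def control_action(minutes_since_storm_start, interceptor_utilization,
                   storage_utilization, first_flush_window_min=60):
    # During the first flush, push harder to get flow to treatment, because
    # that is when most contaminants arrive.
    treat_threshold = 0.98 if minutes_since_storm_start <= first_flush_window_min else 0.90
    if interceptor_utilization < treat_threshold:
        return "route to treatment"
    if storage_utilization < 1.0:
        return "open gate to storage basin or tunnel"
    return "system full: alert operators to likely overflow"

print(control_action(20, 0.95, 0.10))   # first flush: treat even at high utilization
print(control_action(150, 0.95, 0.40))  # later in storm: the same flow goes to storage instead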

Without any certainty that EPA and state officials would accept the approach, MSD implemented a strategy that reduced overflows by 250 million gallons at a savings of 90 percent compared to the 2002 consent decree. The MSD team had reduced the cost of capturing combined sewage from almost 25 cents per gallon to a single penny. After years of patient negotiation, regulators finally accepted this outcome.

At nearly the same time, the city of South Bend, an urban area of 100,000 residents with more than 20 percent living in poverty, faced a similar challenge. Just a year after Cincinnati’s 2002 consent decree, South Bend entered negotiations on its Long Term Control Plan. The prescribed remedy, once again relying on hydrologic-year models, tunnels, and storage, would cost between $800 million and $900 million. The consent decree memorializing the LTCP was signed in 2011, with the model-generated number of overflows to be reduced from approximately 80 times a year to only four.

Just as at MSD, a group of public servants and consultants banded together to find an alternative, since the price tag was unimaginable for a small Rust Belt city with high poverty levels. South Bend also devised a program premised on real-time data from rainfall, discharge, and digital sensors. Turning on the lights yielded a similar outcome: a virtual twin that optimizes existing capacity, enhanced by targeted additional storage, reduced overflows by 70 percent at a cost reduction of $500 million when compared to the 2011 consent decree. After a decade of patient and persistent effort, EPA and the Department of Justice approved a modification to the consent decree memorializing this new approach in 2021. South Bend’s Mayor Pete Buttigieg, now President Biden’s secretary of transportation, ran for president partly on the prowess of this smart sewer approach.

Deploying sensors and artificial intelligence has benefited sanitary sewer systems as well as combined sewers. In San Antonio, Texas, the challenges were blockages, back-ups, and overflows. Each blockage could cause an overflow of raw sewage into waterways, basements, and city streets, triggering the immediate deployment of crews to stop the discharge and clean up the sewage, seeking to minimize the damage to public health and the environment, not to mention property and commercial losses.

EPA and the San Antonio Water System, SAWS, entered into a sanitary sewer overflow consent decree to govern a response. EPA’s best-technology response for SSOs is outlined in its Capacity, Management, Operation, and Maintenance (CMOM) program, which prescribes an inspection and cleaning practice modeled from data on past overflow events (sound familiar?). The SAWS program based on CMOM was estimated to cost $1 billion.

Facing such a staggering bill, SAWS officials also felt compelled to find an alternative and began working with technology providers to deploy sensors underneath manhole covers to provide real-time data on sewer flows. Automated software spotted anomalies: if a blockage was upstream of the sensor, the flow would start to decrease because the blockage would act like a dam holding back flow. If the blockage was downstream from the sensor, the flow would rise as sewage backed up behind the blockage, again just like a dam. In either case, crews were dispatched to investigate and, if need be, clear the blockage long before it became severe enough to cause an overflow.
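
The underlying logic is simple anomaly detection on each sensor’s flow reading. A minimal sketch of the rule described above, with the baseline and tolerance values purely illustrative:

# Hypothetical sketch of the blockage logic: compare a sensor's current level
# to its expected level for the time and weather, and flag sustained
# deviations in either direction.
def classify_reading(current_level, expected_level, tolerance=0.25):
    deviation = (current_level - expected_level) / expected_level
    if deviation <= -tolerance:
        # Flow has dropped: something upstream is damming it back.
        return "possible blockage upstream of sensor - dispatch crew"
    if deviation >= tolerance:
        # Flow is rising: something downstream is backing it up.
        return "possible blockage downstream of sensor - dispatch crew"
    return "normal"

print(classify_reading(current_level=2.1, expected_level=3.0))  # upstream blockage suspected
print(classify_reading(current_level=4.2, expected_level=3.0))  # downstream blockage suspected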

Turning on the lights again drove dramatic results. Rather than sending crews out on a pre-set cleaning schedule, SAWS dispatched them only when actual problems were revealed by real-time data. The utility reduced the 1,246 anticipated cleanings to just 65. Yet alongside that 95 percent decrease, SAWS’s program eliminated many SSOs and is credited with 216 SSO “saves”: blockages that were discovered and resolved before an overflow was triggered. The return on investment for SAWS is estimated to be 115 percent.
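
The headline percentage follows directly from those two figures:

# SAWS figures cited above: cleanings avoided by moving from scheduled to
# condition-based dispatch.
anticipated, actual = 1246, 65
print(f"reduction in cleanings: {(anticipated - actual) / anticipated:.0%}")   # -> 95%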

Today, these approaches and their technologies continue to evolve rapidly. Many cities are reaping considerable benefits from turning on the lights, from the Buffalo Sewer Authority, which has saved at least $400 million in its LTCP while reducing overflows more quickly than planned, to Grand Rapids, Michigan, which has reduced the infiltration and inflow into its sanitary system that was causing overflows at a cost of $30 to $50 million rather than the projected $1 billion.

Tony Parrott, whom we last saw in Cincinnati, is now general manager of the Louisville and Jefferson County Metropolitan Sewer District. He has perhaps the best story to tell. Louisville was one of the early adopters of real-time controls and is using discharge and rainfall data, plus sensors installed throughout a watershed, to target every tool in the arsenal: condition and discharge data to prioritize updates to treatment facilities; a virtual twin that enables automatic use of both gates and local storage to maximize storage in the collection system; flow sensors to identify sanitary blockages before they cause overflows; and flow data to target the best locations for GI to retain stormwater in beautiful local greenways. He has achieved a second modification of Louisville’s consent decree. It now contains perhaps the most ambitious integrated plan, which targets scarce public funds to the greatest risks.

The reality of these stories, though, is that they turn the CWA on its head. No longer is the CWA forcing the adoption of new technologies. Without intervention, the CWA, EPA, and state regulators drive the implementation of solutions that are steeped in the technologies of the past, are extraordinarily expensive, and frequently yield meager environmental results. Municipalities seeking to protect the environment without facing financial oblivion take matters into their own hands and devise alternative solutions. Although most have ultimately been able to gain approval for these directions, cities endured years of diligent, expensive, and persistent efforts to persuade risk-averse regulators.

This reality must change. The good news is that many in EPA and the states understand that change is possible, and various written policies support these directions. For example, the agency now encourages a utility to adopt integrated planning, which assesses all of a facility’s needs and sequences the most efficient and effective steps to improve water quality, rather than rotely applying the next reduction in standards driven by whatever permit provision comes next. This is a step in the right direction, but far more must be done.

First, the CWA should be updated to shift the focus from best available technologies for individual point source discharges to turning on the lights on the real-time conditions of treatment plants, sewage and stormwater collection systems, and the watersheds in which they reside, viewed as a single system. Mandating the deployment of sensors, paired with existing discharge information and available weather data, will deliver insights on the health of each waterbody and on where and how public dollars can be spent to yield the best improvements to water quality.

In this manner, we can finally integrate the two systems incorporated into the CWA in 1972. Rather than being separate from water quality-based standards, technology will enable us to understand the water quality of a system, identify sources of pollutants and problems, and target the best new technologies—including automated gates, targeted storage, anticipated cleanings, and multi-benefit green infrastructure. Hundreds of millions, if not billions, of dollars will be saved from projects that are otherwise over-designed, based on outdated models and technologies, or simply not needed. And paired with savings, water quality and the environment will be better protected than ever before.

Second, the CWA needs to empower EPA to mandate new standards for the operation and maintenance of water systems. The success of San Antonio, Grand Rapids, and many other communities shines a light on how low-cost sensors, cell networks, and artificial intelligence have radically improved the operation of these systems, reducing overflows and risks to public health and water quality at far lower cost than past practices. This is a breathtaking success and should become the law of the land. EPA must shift from being open to these strategies to requiring them in the first place.

Some will balk at such mandates, as they would be paired with other changes, including a hiatus on the automatic ratcheting down of point source discharge limits unless the virtual twin of the system shows that a compelling water quality improvement in the receiving water would result. I would counter that these technologies are no longer new or innovative and are nearly as well understood today as secondary treatment was in 1972. The environmentalist in me believes this is the step we need today to achieve the next generation of protection at a price we can afford.

Third, the CWA needs to be amended to create a safe “box” in which regulators, utilities, and community and environmental advocates can experiment with new approaches that harness the speed and range of new technical capabilities to achieve water quality goals. This approach was tried during the Clinton administration with Project XL, which was designed to foster pilots to improve the regulatory system. As a member of the XL team, I discovered that without a statutory safe harbor from the mandates of the CWA, with appropriate safeguards of course, no real experimentation was possible. Every major system, regulatory or otherwise, needs to be able to challenge its assumptions and develop new approaches in the face of changing times.

Finally, these changes should be made soon, so that funding from the ARP and the Infrastructure Act can be targeted to help pay for these new mandates, just as the construction grants program in the 1972 CWA paid a large measure of the cost of secondary treatment. Without change, infrastructure funding will support replacing our old systems with newer systems that look just like them. What a shame that would be. We should encourage, and maybe even require, that this expenditure of public funding build a new system that reflects the best of what we can offer, permanently improves the environment, and reduces the cost of water systems, yielding benefits to future generations.

The 1972 CWA was a crowning achievement of its generation, and has delivered breathtaking and lasting benefits to the water quality and environment of this great country. Fifty years later, we have the knowledge, experience, and capability to update the act to improve the waters of this country for the next half century. Our forebears were bold in 1972 and it is our turn to take bold action in 2022. TEF

COVER STORY Cities facing huge costs for implementing technology-based standards for storm runoff and sewage treatment are now looking beyond the letter of the federal pollution law to achieve superior water quality gains at substantially lower costs. It is time for the CWA to catch up.

3D-Printers and Maker-Spaces: Improving Health and Environmental Sustainability Through Voluntary Standards
Author
University of Vermont for the Environmental Law Institute
Date Released
August 2019
3D-Printers and Maker-Spaces

This report is the culmination of a three-month investigation into the nature of 3D-printing with regard to its potential social and environmental implications. Three graduate students from the Sustainability Innovation MBA program at the University of Vermont teamed up with members of the Environmental Law Institute to identify these implications and offer recommendations for sustainability within the specific sector of maker-spaces in the 3D-printing industry.

Blockchains: Environmental Hype or Hope?
July 2018

Washington, DC: The hype around blockchains—the programming protocol originally created for Bitcoin—is bidirectional, ranging from apocalyptic predictions of bitcoin energy use that will “destroy our clean energy future” to rosy scenarios that “blockchain technology can usher in a halcyon age of prosperity for all.” The question for policymakers, therefore, is how to ensure that the environment profits in the end.

Fix Title VI
Author
Michael Curley - Environmental Law Institute
Lindsay Haislip - Cambridge Associates
Current Issue
Issue
6

COVER STORY ❧ The Clean Water Act’s state revolving fund program was created in 1987 to finance wastewater treatment plants — but it could provide the funding for so many other environmental improvements if we use creative incentives to tackle 21st century problems.