Are We Endangered Yet? Artificial Intelligence and the Human Species
Oliver Houck - Tulane University

Five years ago, Google cofounder Sergey Brin said, “You should presume that someday we will be able to make machines that can reason, think, and do things better than we can.” That someday had already arrived by 2014, when the statement was made, and the field is not standing pat. Artificial intelligence, or AI, had beaten the best chess players in the world and gone on to top the grandmaster of Go, a uniquely complicated and intuitive game, which it opened with a move so unorthodox that its opponent left the table, flustered. Fifty moves later the computer won. It had created its own strategy.

Around the same time, two Facebook chatbots programmed to conduct negotiations over small consumer items (hats, baseball bats), each seeking the best bargain, soon began using a language of their own. Facebook shut them down. Likewise, Google developed a tool for translating between English and other languages and found that its program had devised an approach that bypassed English altogether. Google liked the result, and kept it going. Both the Facebook and Google dialects had become incomprehensible to humans. What could possibly go wrong here?

Perhaps the most user-friendly take on this question is a TED Talk by Nick Bostrom, a philosopher and technologist, entitled “What Happens When Our Computers Get Smarter Than We Are?” You can find it easily on YouTube, and it is worth watching. Bostrom posits that machines may preserve our humanity, or they may just leave us behind.

He begins with the speed of change. If, as he imagines, our species appeared on Earth only one year ago, then the industrial era began within the last two seconds and the first computers appeared in the latest blink of an eye. This progress was driven largely by human intelligence. The human driver is now challenged by computer “superintelligence” that learns, and then decides, on its own. When Bostrom asked two groups of AI experts at what point there would be a 50 percent probability of computers performing “almost any job as well or better than humans can do,” the median answer fell between 2040 and 2050. We do not yet know, of course, but surprises come daily.

The reasons lie in simple physics. While our biological neurons fire about 200 times a second, computer signals can travel at the speed of light. Our brains, moreover, are confined within one small cranium, while computers can be housed in warehouses, in city blocks of buildings. All of this potential lies dormant, waiting to be tapped. When it is, we will see a superintelligence explosion, and our own fate may depend on what it does. “Think about it,” Bostrom proposes, “machines will be better at inventing than we are, and doing it on digital time scales.” At which point the possibilities are unfathomably large.

For Bostrom, this raises two concerns. One is that if we create a “really powerful optimization process” for this superintelligence in order to obtain X, “you better be sure that X includes all we care about.” King Midas asked for the golden touch, and got it. He put his hand on his daughter and she turned to gold, and then his food became cold metal under his touch. (Query: The point is well taken, but is programming “all we care about” even possible?)

A second concern is that, once engaged, there may be no off-ramp. We could, of course, program one, but a yet more intelligent machine could find a work-around by hook or by crook. Hackers and entire countries will try. Sooner or later, he says, “The genie will get out of the bottle.” Then what? We are not at the point of knowing this either. Meanwhile AI is going gangbusters, on the lips and in the labs of institutions far and wide, inventing more boldly at every step: CRISPR, IoT, blockchain, and the like, none with brakes and no stop signs in sight. Asked recently whether limits of some sort might be necessary, one enthusiast replied, “I certainly hope not!”

Bostrom is not alone. His book on the subject, Superintelligence: Paths, Dangers, Strategies, was followed by, inter alia, Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence, and now Brett Frischmann and Evan Selinger’s Re-Engineering Humanity. All three, and a growing body of articles as well, describe the same phenomenon. Which, as with humans, is a bag of mixed blessings that we started opening years ago.

One is the simple fact of work. Technology began changing humanity from the days of the wheel and the plow, with largely beneficial outcomes. Industrial technology brought more benefits, including steady employment, which held true until the machines began erasing the workers, but there remained a demand for minds to plan, evaluate, and make decisions. With advances in AI even these functions yield, shrinking the work force further and dividing its rewards yet more starkly between haves and have-nots. In the taxi industry, GPS removed the need to know a service area, followed by Uber, Lyft, and virtually anyone with a driver’s license, to be followed next by autonomous vehicles that remove the driver altogether. Who wins in this scenario?

At the same time, and paradoxically, the nature of work has been dumbed down as well. The repetitive labor of the assembly line (called Taylorism after its founding father, Frederick Taylor), where not entirely displaced by machine, is now reinforced by AI calculations of worker efficiency (called Digital Taylorism) through every step in the supply chain. Amazon employees report “feeling like robots,” their time so scrutinized that they urinate in bottles rather than walk to the bathroom. A related article begins, “Instead of asking ‘Are robots becoming more human?’ we need to ask ‘Are humans becoming more robotic?’” It is not really a question. A recent cartoon features a robotic-looking recruiter telling several prospects, “You’ll love it here, it’s a totally dehumanizing environment.”

Another impact is by now commonplace. We are guided through the day by disembodied voices (“turn right at next stoplight”), and return home to tell the equally disembodied Alexa what to do. Digital media and a suite of evolving platforms attract us like moths to the blue screen; students with smartphones walk across campus like zombies; families at dinner stare at their mobile devices (not even the same TV show), a new way for humans to be. While these devices are indisputably handy (and at times can save lives), they enable the aggressive among us to prey at will, and the more retiring of us to retreat to electrons for the experiences of life itself, severed from the kind of physical contact that humans have relied on for tens of thousands of years.

Artificial intelligence leaves us as well at the mercy of those who, armed with a computer-load of data about everything we have ever purchased, photographed, emailed, “liked,” or done, will sell us yet one more thing. Digital face recognition now tells department store sales staff about your buying history and financial profile before you even get to the counter. With AI diagnostics the largest corporations in the world have the ability to know us better than we know ourselves — and to act on it.

More problematic still, and reaching root principles of democracy, AI allows any entity and any country to target us, individually, with rumors and falsehoods to which it deduces we are susceptible, and that we will then carry forward like articles of faith. To wit: the late presidential election. As NPR’s The Hidden Brain reveals weekly, humans are driven by habits and emotions, and whoever can target them most effectively will win the day. AI will run impeccable political campaigns.

At this point, perhaps only as a salve, most treatments of AI turn hopeful: in return for a cornucopia of benefits humans will find a way to keep control, to form a cooperative relationship with their new (and vastly more capable) partner. A wall-size ad recently seen in the Houston airport reads, “Let’s write the future with Robots that have what it takes to Collaborate!” Harvard University scholars have taken the same approach, infusing ethics into AI-related courses toward “a future that combines the speed and statistical prowess of intelligent computers” with “innate human talents . . . the frontier of AI systems.”

I am not so sanguine. We have already embraced the gods of easy information, and in turn yielded the field of thinking about it. My classes produce papers steeped in web-drawn data but short on analysis. My nephew boasts that he doesn’t have to know a thing; he can pull it up on Google — which he can, including square roots and the Fall of Rome. Neither am I sure that we humans care about yielding to AI any more than we do about yielding personal data to Google, which makes a fortune selling it to others, who make fortunes using it to target us. We have become a commodity, and we seem content with the bargain.

At road’s end, what is it that makes us human? For several centuries we thought it was the ability to reason, but we are now creating systems that out-reason us hands down. (Apparently they also create excellent memes.) Which leaves the human heart. It still exists, but can it marry the machine? Inevitably, it seems, we will find out.

In the meantime, a young scientist in China, for the best of reasons, recently practiced gene editing on two human embryos to make them resistant to HIV — and met a storm of criticism. Senior peers convened a meeting and pronounced against the practice, stepping away from the fire. But only for the moment. It was too soon, they said; the practice was premature; it was not the wrong thing, just the wrong time. And when the right time comes? This, inevitably, we are going to find out too.

To be sure, none of this has happened yet, but it is difficult to imagine, given human ingenuity and the stakes involved, what limits may even be possible. It is also difficult to imagine, for the first time in the human experience, just how we ourselves will look, think, and act (and reproduce) one century from now. Is there a point down the road when, like bringing water to a boil, we stop being Homo sapiens and start being Homo something-else?

And will we care when it arrives?


ELI Report
Laura Frederick - Environmental Law Institute

Artificial Intelligence: Will algorithms benefit the environment? Report points the way to beneficial uses of computerization

Artificial intelligence is changing how our society operates. AI now helps make judicial decisions, render medical diagnoses, and drive cars. AI also has the potential to revolutionize how we interact with our environment. It can help improve resource use and energy efficiency and predict extreme weather.

AI can also exacerbate existing environmental problems. For example, software manipulation of more than half a million VW diesel automobiles created one of the largest environmental scandals of the past decade.

ELI’s Technology, Innovation, and the Environment Program was developed to better understand the environmental impacts and opportunities created through emerging technologies and their underlying innovation systems.

When Software Rules: Rule of Law in the Age of Artificial Intelligence, a new report from program director David Rejeski, explores the interaction between AI and the environment and the need for some form of governance to ensure that AI is deployed beneficially.

“As environmental decisionmaking becomes internalized into AI algorithms, and these algorithms increasingly learn without human input, issues of transparency and accountability must be addressed,” said Rejeski. “This is a moment of opportunity for the legal, ethical, and public policy communities to ensure positive environmental outcomes.”

“When Software Rules” offers the government, businesses, and the public a number of recommendations they can use as they begin to consider the environmental impacts of AI.

The report discusses concerns with AI systems. These include unintended consequences, such as racial bias in algorithms, and the common difficulty of understanding the logic of deep-learning systems and how they reach decisions. Other sources of concern include algorithms that operate on correlation without establishing causality; legal liability issues; loss of privacy from data mining; and the risk of hacking.

Some form of governance over AI systems is necessary to address these issues and ensure responsibility, including taking environmental considerations into account. Semi-formal governance may include voluntary codes outlining engagement with AI research, or self-governance by institutions looking to create “ethical” AI systems. More formal governance may include legislation protecting consumers from faulty algorithms.

ELI provides a number of recommendations as to how AI governance can include consideration of environmental impacts. Suggestions are provided for all stakeholders: the private AI sector, programmers, governments, and the public.

For example, the private AI sector can develop research teams that include evaluation of the socio-environmental impacts of their algorithms and assemble stakeholder groups to develop guidelines for sustainable development of AI.

Programmers can increase the transparency of their algorithms so users can understand why decisions are being made, and they can increase their commitment to prioritizing environmental benefits.

Governments can ensure that AI systems are powered by renewable energy to meet the energy demand these new systems create, and can offer incentives for the development of AI that tackles environmental issues.

Members of the public can advocate for systems that promote their cultural norms and values, including environmental protection, and they can make responsible consumer choices by supporting AI companies that are transparent and environmentally conscious.

As AI governance becomes a societal expectation and is later bound by semi-formal or formal contracts, the environment must be a central focus in AI discourse and in subsequent laws and policy, the report concludes. ELI will continue to provide guidance on how these goals can best be achieved.

“When Software Rules: Rule of Law in the Age of Artificial Intelligence” is available for free download at eli.org/research-report/when-software-rules-rule-law-age-artificial-intelligence.

Al-Moumin awardees highlight promise of peacebuilding efforts

ELI co-hosted the annual Al-Moumin Distinguished Lecture on Environmental Peacebuilding, a hallmark of the Institute’s Environmental Peacebuilding Program. Co-sponsored by the Environmental Law Institute, American University, and the United Nations Environment Programme, the lecture recognizes leading thinkers who are shaping the field of environmental peacebuilding and presents the prestigious Al-Moumin Award. The series is named for Mishkat Al-Moumin, Iraq’s first Minister of Environment, a human rights and environment lawyer, and a Visiting Scholar at ELI.

This event, now in its fifth year, honored Ken Conca and Geoff Dabelko for their outstanding contributions to the field.

Conca is a professor of international relations in the School of International Service at American University. Dabelko is a professor and director of environmental studies at the Voinovich School of Leadership and Public Affairs at Ohio University; he is also a senior advisor to the Environmental Change and Security Program of the Woodrow Wilson International Center for Scholars.

Fifteen years ago, Conca and Dabelko published Environmental Peacemaking, a rejoinder to grim scenarios foreseeing environmental change as a driver of conflict. Conca, Dabelko, and collaborators argued that, despite conflict risks, shared environmental interests and cooperative action could also be a basis for building trust, establishing shared identities, and transforming conflict into cooperation.

In their lectures, Conca and Dabelko reflected on the evolution of environmental peacebuilding research since their work began in the early days of the post-Cold War era, their seminal publication, and their long-term engagement with policymakers and practitioners applying these insights around the world.

Their work transformed, and continues to have a profound impact on, the way scholars and practitioners approach and understand the intersection of environmental protection, national security, and human rights.

Conca and Dabelko’s work is also the heart of ELI’s Environmental Peacebuilding Program: As the world experiences increasing pressures on its natural resources and climate, countries must learn to peacefully resolve resource disputes and make the environment a reason for cooperation rather than conflict.

Team travels to Indonesia to prep for judicial education course

Legal authorities are now available in Indonesia that enable civil society and the government to file claims holding responsible parties liable for damages and for the restoration of natural resources.

Through an ELI workshop and curriculum developed in conjunction with the Indonesian Center for Environmental Law and others, judges will learn best practices and methods for implementing new legal processes, including environmental damage valuation and restoration and compensation, tailored to the specific needs of the host country.

The goal is to promote environmental accountability through judicial enforcement. Ultimately, the benefits will include reduced deforestation and greenhouse gas emissions, as well as improved biodiversity and quality of life for vulnerable communities.

ELI recently traveled to Indonesia to help prepare for the week-long workshop to be held this summer. Staff met with various local stakeholders to gain background on topics like injury quantification, restoration and compensation, and settlement. ELI was also able to hear from judges which topics are most important to cover.

ELI staff held focus groups with ICEL as well as the Ministry of Environment and Forestry and Ministry of Justice and Human Rights, using an oil spill case to discuss valuation, settlement, and transboundary issues.

ELI and ICEL also held focus group discussions with the Supreme Court of Indonesia’s Environmental Working Group and Center for Training and Legal Research. The discussion included a presentation on the needs assessment by ICEL and a presentation on the comparative study of valuation, compensation, and restoration practice in several countries.

ELI’s judicial education program is a hallmark of the Institute’s work. With in-depth consultations, custom design of programs to meet the specific needs of the particular jurisdiction, and success in creating institutional capacity, the lessons learned continue to be applied after the education is completed. Since 1991, ELI has developed, presented, and participated in more than 40 workshops on critical topics in environmental law for more than 2,000 judges from 27 countries.

ELI met with a local NGO and members of the government to prepare for a workshop on judicial enforcement of environmental laws.

Field Notes: Water summit showcases ELI legal expertise

ELI President Scott Fulton and Director of ELI’s Judicial Education Program Alejandra Rabasa traveled to Brazil to participate in the World Water Forum. The forum is the world’s biggest water-related event and is organized by the World Water Council, an international organization that brings together all those interested in the theme of water. Supreme Court justices from over 50 countries were in attendance to shine a light on the importance of the rule of law in advancing water quality goals.

ELI hosted a day-long conference on Environmental Law In Practice in Detroit. The conference presented a spectrum of emerging legal issues with a focus on environmental justice, and offered a wide-ranging exploration of career opportunities in the EJ field. The event featured environmental law experts on panels including Careers in Environmental Justice, Energy & Climate Justice, Water Access and Affordability, and Urban Air Quality.

Agustin V. Arbulu, executive director of the Michigan Department of Civil Rights, delivered opening remarks. Keynote addresses were given by Mustafa Santiago Ali, senior vice president of climate, environmental justice and community revitalization, Hip Hop Caucus, and Charles Lee, senior policy advisor, EPA Office of Environmental Justice.

Members of the public came together with lawyers, students, academics, civil rights and social justice advocates and activists, and community groups to discuss pressing issues.

The Conference was co-sponsored by Wayne State University Law School’s Transnational Environmental Law Clinic and Environmental Law Society, University of Chicago Law School’s Abrams Environmental Law Clinic, the American Bar Association’s Environmental Justice Committee of the Section of Civil Rights and Social Justice, and the Great Lakes Environmental Law Center.

Director of the Ocean Program Xiao Recio-Blanco moderated a webinar on Current Developments on U.S. Fisheries Policy. The Trump administration’s approach to fisheries management seems to constitute a significant policymaking shift. Recent decisions such as extending the Gulf of Mexico season for red snapper or overturning a decision by the Atlantic States Marine Fisheries Commission that would have cut New Jersey’s recreational quota for summer flounder seem to go against NOAA’s traditional approach of situating scientific information at the center of fisheries decisionmaking.

The webinar discussed these and other recent developments and assessed the direction U.S. fisheries policymaking may take in the future.

ELI and the China Environmental Protection Foundation held their first training session to build the capacity of public interest groups and prosecutors in China since ELI received temporary registration for an environmental protection-related project from China’s Ministry of Environmental Protection and the Beijing Bureau of Public Security.

The session was held at Tianjin University Law School. A total of 53 participants — comprising representatives from public interest groups, environmental courts, prosecutors, and environmental protection bureaus — attended from 16 provinces, autonomous regions, and cities.
