When Cars Lie

Wednesday, June 21, 2017

For months, Volkswagen has been reeling from an emissions manipulation scandal affecting more than half a million U.S. vehicles and costing the company more than $20 billion in fines and settlements. The financial and reputational damage has now spread into the VW supply chain, with Bosch, the company that developed the emissions control software for VW, agreeing to pay customers over $300 million in damages. Who is next? Likely Fiat Chrysler and Daimler, both under investigation for evading diesel emissions rules.

One important lesson from “dieselgate” is that deceit is programmable. Also troubling is that deception by machines, and by implication by their human programmers and managers, can remain undetected for years and propagate through hundreds of thousands of vehicles. This is a reminder of what happens when powerful technologies become both ubiquitous and invisible.

The Volkswagen Jetta TDI is one model that used defeat devices to sidestep emissions standards (Photo: Flickr, M 93)

The program used by the automakers was not especially complex: an example of rule-based programming in which the computer detected an attempt to measure the vehicle’s emissions and changed engine parameters accordingly. The deception was relatively easy to achieve because the procedures used for emissions testing are well known, so the software could be “trained” to detect and defeat them. Eventually, researchers and regulators outsmarted the program. But the engineers at VW could have done better. We know from studies of lying that most of us are not good at it; lying takes practice, and that means a machine could be trained to improve its deception. Unlike humans, a computer program has no body language to give away its duplicitous intentions: no sweating, stammering, or furtive glances. The program could have tested a battery of subtle strategies to deceive the regulators, optimizing those that worked best and even transferring the best solutions to other cars in a network. This is an interesting thought experiment, but a bad idea.
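To make the point concrete, here is a minimal sketch of the kind of rule-based check described above. Everything in it is hypothetical, including the Telemetry fields, the thresholds, and the engine-map names; real defeat devices reportedly keyed off similar dynamometer signatures, such as driven wheels that “move” while the steering wheel stays fixed and the undriven wheels do not turn.

```python
# Illustrative sketch only: a toy rule-based "test-cycle detector" of the kind
# described above. All names and thresholds are invented; this is not VW's
# actual code.

from dataclasses import dataclass


@dataclass
class Telemetry:
    """A simplified snapshot of vehicle sensors (hypothetical fields)."""
    speed_kmh: float           # driven-wheel speed
    steering_angle_deg: float  # steering-wheel deflection
    rear_wheel_kmh: float      # undriven-wheel speed (zero on a 2WD dynamometer)


def looks_like_dyno_test(t: Telemetry) -> bool:
    """Rule-based check: a car 'moving' with a locked steering wheel and
    stationary rear wheels is almost certainly on a dynamometer."""
    return (
        t.speed_kmh > 20.0
        and abs(t.steering_angle_deg) < 1.0
        and t.rear_wheel_kmh < 0.5
    )


def select_engine_map(t: Telemetry) -> str:
    """Switch engine calibration based on the detector: the 'defeat' itself."""
    return "clean_test_map" if looks_like_dyno_test(t) else "normal_road_map"


if __name__ == "__main__":
    on_dyno = Telemetry(speed_kmh=50.0, steering_angle_deg=0.2, rear_wheel_kmh=0.0)
    on_road = Telemetry(speed_kmh=50.0, steering_angle_deg=8.5, rear_wheel_kmh=49.8)
    print(select_engine_map(on_dyno))  # -> clean_test_map
    print(select_engine_map(on_road))  # -> normal_road_map
```

A handful of if-statements is all it takes, which is precisely why this kind of deceit could hide inside engine firmware for years.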

In the future, however, artificial intelligence (AI) opens another opportunity. What if the system were capable of moral reasoning and monitored for ethical lapses rather than for opportunities to lie? What if it effectively learned how to defeat the intentions of its creators rather than deceive the enforcers, and subsequently turned itself and the other vehicles in to the authorities? I, Robot meets EPA.
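If the detector above were pointed the other way, its core could be equally simple: compare real-world emissions to the certified limit and report any exceedance. In this sketch the limit is the Euro 6 diesel NOx figure; the vehicle ID and the self-reporting mechanism are invented for illustration.

```python
# Continuing the thought experiment: the same rule-based machinery, inverted.
# Instead of detecting regulators, a hypothetical on-board monitor compares
# on-road emissions to the certified limit and reports violations.

CERTIFIED_NOX_LIMIT_G_PER_KM = 0.08  # the Euro 6 diesel NOx limit


def is_compliant(measured_nox_g_per_km: float) -> bool:
    """Return True if on-road emissions are within the certified limit."""
    return measured_nox_g_per_km <= CERTIFIED_NOX_LIMIT_G_PER_KM


def self_report(vehicle_id: str, measured_nox_g_per_km: float) -> None:
    """Stand-in for 'turning itself in': in practice this might log the
    exceedance or transmit it to a regulator's reporting endpoint."""
    print(f"{vehicle_id}: NOx {measured_nox_g_per_km:.2f} g/km exceeds "
          f"certified limit of {CERTIFIED_NOX_LIMIT_G_PER_KM} g/km; reporting.")


if __name__ == "__main__":
    # On-road diesel NOx readings several times the lab figure were widely
    # reported during the scandal; 0.53 g/km is a hypothetical example.
    reading = 0.53
    if not is_compliant(reading):
        self_report("VEHICLE-EXAMPLE-001", reading)
```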

Google has begun to use artificial intelligence in prototypes for self-driving cars. (Photo: Marc van der Chijs)

The VW emissions scandal was based on old technology, not on where AI is heading. The next AI frontier involves what the Defense Department’s advanced research arm, DARPA, calls lifelong learning machines: systems that improve their reasoning on the fly through experience. This would allow machines to stop being mere machines and to think and reason in novel situations much as humans do, potentially questioning their own behavior and correcting misjudgments.

This may sound scary, but the implications are worth considering. Should we trust machines over humans when it comes to environmental decision-making? That may be a viable option if the ethical judgments and motivations of our environmental leaders are suspect and no longer grounded in sound scientific reasoning. Advances in AI will move us out from behind the steering wheel. Let’s hope the next generation of vehicles makes not only the right turns but also the right environmental decisions.

David Rejeski directs the Technology, Innovation, and the Environment Project at the Environmental Law Institute in Washington, DC.