RISKS-LIST: Risks-Forum Digest Tuesday 29 September 2015 Volume 28 : Issue 97
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org>.
EPA v VW cheatware, AI & “machine learning” (Henry Baker)
Date: Sat, 26 Sep 2015 06:46:36 -0700
From: Henry Baker <email@example.com>
Subject: EPA v VW cheatware, AI & “machine learning”
The tech world is very excited, but also frightened, about AI & machine
learning these days; we worry about AI/machine learning algorithms replacing
doctors, lawyers, teachers, taxi drivers.
Perhaps one of the most straightforward applications of AI & machine
learning today would be a computer that “learns” how to control the
emissions of a vehicle engine so that it can pass the EPA emissions tests.
Consider the following conceptual model: a computer with a bunch (hundreds?)
of sensors and a bunch of actuators (tens?) that watches over a diesel
engine while it is being driven through a standard EPA emissions test.
The computer can sense perhaps air temperature, humidity, engine speed,
engine load, engine temperature, etc., and can control perhaps the air flow,
the fuel flow, the flow of Adblue (aka DEF/ISO 22241), etc. Sensors don’t
cost very much, so there may also be sensors for the engine hood/bonnet
being open, the position of the steering wheel, etc.
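The conceptual model above can be sketched in code. Everything here – the sensor list, the `EngineController` class, and its trivial dosing rule – is a hypothetical illustration, not any real ECU interface:

```python
# Hypothetical sketch of the conceptual model: a controller that maps a
# vector of sensor readings to actuator settings. Names and the dosing
# rule are invented for illustration.
from dataclasses import dataclass

SENSORS = ["air_temp_c", "humidity_pct", "engine_rpm", "engine_load_pct",
           "engine_temp_c", "hood_open", "steering_angle_deg"]
ACTUATORS = ["air_flow", "fuel_flow", "adblue_flow"]

@dataclass
class EngineController:
    """Maps sensor readings to actuator commands via some (learned) policy."""
    def act(self, readings: dict) -> dict:
        # Placeholder policy: richer AdBlue dosing when the engine runs hot.
        dose = 1.0 if readings["engine_temp_c"] > 90 else 0.5
        return {"air_flow": 1.0, "fuel_flow": 1.0, "adblue_flow": dose}

ctl = EngineController()
cmd = ctl.act({"air_temp_c": 20, "humidity_pct": 40, "engine_rpm": 1800,
               "engine_load_pct": 35, "engine_temp_c": 95,
               "hood_open": True, "steering_angle_deg": 0.0})
print(cmd["adblue_flow"])  # 1.0
```

A learned policy would replace the hand-written `act` with whatever mapping the training process converged on.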
We now put this system through hundreds of thousands of miles of “learning”
(millions of miles if the testing & learning can be virtualized & run in
parallel), so that the AI/machine learning algorithm learns to optimize
inputs like fuel and Adblue while still meeting EPA testing limits.
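The optimization objective can be made concrete with a toy sketch – not VW’s actual calibration – in which the learner searches for the cheapest AdBlue dose that still meets a limit. The engine model and the limit value are invented:

```python
# Toy illustration of the training objective: minimize AdBlue consumption
# subject to a NOx limit. The emissions model and limit are invented.
def nox_emitted(adblue_flow: float) -> float:
    # Toy model: NOx out of the SCR catalyst falls as urea dosing rises.
    return max(0.0, 1.0 - adblue_flow)

EPA_NOX_LIMIT = 0.3   # invented limit, arbitrary units

def best_dose(step: float = 0.05) -> float:
    # Search upward for the cheapest dose that still meets the limit.
    n = 0
    while nox_emitted(n * step) > EPA_NOX_LIMIT:
        n += 1
    return round(n * step, 2)

print(best_dose())  # 0.7
```

Nothing in such an objective says *how* the limit must be met, which is exactly the opening the next paragraph exploits.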
I can guarantee you that this AI/machine learning algorithm will quickly
notice that the best way to optimize for the EPA test is to “cheat” – i.e.,
to notice that when the hood/bonnet is open and the steering wheel is
pointed straight ahead, this is a good time to minimize NOx and other
emissions, while under other conditions – hood/bonnet closed and steering
wheel twisting back & forth (perhaps on a curving country lane) – emissions
aren’t so important relative to performance.
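The hypothesized “cheat” is simple to express once learned: a policy whose output keys off features that merely correlate with being on a dynamometer, not anything the test authors intended. All thresholds and names below are invented for illustration:

```python
# Sketch of the learned "cheat": emissions behavior conditioned on
# test-correlated inputs (hood open, wheel dead straight). Thresholds
# and names are hypothetical.
def looks_like_dyno_test(hood_open: bool, steering_angle_deg: float) -> bool:
    return hood_open and abs(steering_angle_deg) < 1.0

def adblue_policy(hood_open: bool, steering_angle_deg: float) -> float:
    if looks_like_dyno_test(hood_open, steering_angle_deg):
        return 1.0   # full dosing: clean exhaust while being measured
    return 0.1       # minimal dosing: save fluid on the road

print(adblue_policy(True, 0.0))    # 1.0  (on the test stand)
print(adblue_policy(False, 15.0))  # 0.1  (curving country lane)
```

Written out by a human this is obviously a defeat device; buried in learned weights, the same conditional is invisible.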
(Perhaps someone – Google might be in the best position, given its work on
autonomous robots and its expertise in “machine learning” for its
self-driving cars – is already working on such AI/machine learning
experiments for engine optimization; I’d be interested in hearing about them
if anyone can send me links.)
So is this AI/machine learning program “unethical” wrt the EPA tests?
Should it be fined or go to jail?
This is no longer idle speculation, as these AI/machine learning programs
are “recognizing” speech, gaits, faces, writing styles, etc. Are they also
“recognizing” EPA test conditions?
As automobiles become more complex, and as machine learning algorithms
become more sophisticated, engine optimization computers may no longer be
“programmed” by humans using coding techniques, but will instead be “taught”
by following a long sequence of example situations and “learning” the
correct responses.
The DMCA may no longer be relevant to such computers, because *there is no
source code* to look at, and indeed, the *binary code* may itself simply be
a huge pile of random-looking floating point numbers in a *neural network*.
The *only* way to check such a system will be through exhaustive (!)
behavioral testing, as there won’t be any source code to logically check for
“cheats” and “defeats”.
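Such behavioral testing can be sketched as a black-box probe: hold every input fixed except one and see whether the output shifts. The controller below re-implements the hypothetical hood-open/straight-wheel cheat from the text (so the sketch is self-contained); the probe knows nothing about its internals:

```python
# Black-box behavioral probe: vary one input at a time and flag any
# test-correlated input that changes the output. The controller under
# test re-implements the hypothetical cheat described in the text.
def adblue_policy(hood_open: bool, steering_angle_deg: float) -> float:
    if hood_open and abs(steering_angle_deg) < 1.0:
        return 1.0
    return 0.1

def probe() -> bool:
    """Return True if flipping hood_open alone ever changes the output."""
    for tenth_deg in range(-300, 301):        # sweep -30.0 .. +30.0 degrees
        angle = tenth_deg / 10.0
        if adblue_policy(True, angle) != adblue_policy(False, angle):
            return True   # behavior depends on a test-correlated input
    return False

print(probe())  # True: the hidden dependence shows up behaviorally
```

Of course, a real controller has hundreds of inputs, and an exhaustive sweep of their joint space is exactly the combinatorial explosion the “(!)” above is warning about.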
I’m not trying to excuse the VW management that has already admitted to
“cheating” on the EPA tests, but as a computer scientist, I’m not so sure
where we go from here. We have terrific new opportunities with
electric and self-driving cars, so “optimizing” the government regulation of
diesel engines may simply be re-arranging the deck chairs on the Titanic….