Policing Driverless Cars

Christopher Hart, who heads the National Transportation Safety Board, thinks we may never reach full automation on U.S. roads.

Auto accidents kill more than 33,000 Americans each year, and companies working on self-driving cars, such as Alphabet and Ford, say their technology can slash that number by removing human liabilities (see “Your Driverless Ride Is Arriving,” page 34). But Christopher Hart, chairman of the National Transportation Safety Board, says that humans can’t be fully removed from control. He told MIT Technology Review that autonomous vehicles will indeed be much safer but will still need humans as copilots.
How optimistic are you that self-driving cars will cut into the auto accident death toll?
I’m very optimistic. For decades we’ve been looking at ways to mitigate injury when you have a crash. We’ve got seat belts, we’ve got air bags, we’ve got more robust [auto body] structures. Right now, we have the opportunity to prevent the crash altogether. And that’s going to save tens of thousands of lives.
Autopilot systems can also create new dangers. The NTSB has said that pilots’ overreliance on automation has caused crashes. Do you worry about this phenomenon being a problem for cars, too?
The ideal scenario that I talked about, saving the tens of thousands of lives a year, assumes complete automation with no human engagement whatsoever. I’m not confident that we will ever reach that point. I don’t see the ideal of complete automation coming anytime soon. Some people just like to drive. Some people don’t trust the automation, so they’re going to want to drive. [And] there’s no software designer in the world that’s ever going to be smart enough to anticipate all the potential circumstances this software is going to encounter.

The challenge is that when you have not-so-complete automation, with still significant human engagement, complacency becomes an issue. That’s when lack of skills becomes the issue. So our challenge is: how do we handle what is probably going to be a long-term scenario of still some human engagement in this largely automated system?
Some people say that self-driving cars will have to make ethical decisions—for example, deciding whom to harm when a collision is unavoidable. Is this a genuine concern?
I can give you an example I’ve seen mentioned in several places. My automated car is confronted by an 80,000-pound truck in my lane. Now the car has to decide whether to run into this truck and kill me, the driver, or to go up on the sidewalk and kill 15 pedestrians. That would [have to] be put into the system. Protect occupants or protect other people? That to me is going to take a federal government response to address. Those kinds of ethical choices will be inevitable. In addition to just ethical choices—what if the system fails? Is the system going to fail in a way that minimizes [harm] to the public, other cars, bicyclists? The federal government is going to be involved.
What might that look like?
The Federal Aviation Administration has a scheme whereby if something is more likely than one in a billion to happen, you need a fail-safe. Unless you can show that the wing spar failing—the wing coming off—is less than one in a billion, it’s “likely” to happen. Then you need to have an alternate load path [a fallback structure to bear the plane’s weight].

That same process is going to have to occur with cars. I think the government is going to have to say, “You need to show me a less-than-X likelihood of failure, or you need to show me a fail-safe that ensures that this failure won’t kill people.” I think setting the limit is going to be in the federal government domain, not state government. —Andrew Rosenblum
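
Hart’s aviation analogy amounts to a simple decision rule: for each failure mode, either show that its probability falls below the threshold (one in a billion, in his example) or provide a fail-safe such as an alternate load path. Below is a minimal Python sketch of that rule; the failure modes, probabilities, and fail-safe flags are hypothetical values chosen for illustration, not real certification data.

# Sketch of the FAA-style rule Hart describes: a failure mode passes
# only if its probability is below the threshold, or a fail-safe
# backs it up. All numbers here are hypothetical illustrations.

THRESHOLD = 1e-9  # "one in a billion" per operating hour

failure_modes = [
    # (name, probability per hour, has a fail-safe backup?)
    ("wing spar fracture", 5e-10, False),        # rare enough on its own
    ("steering actuator fault", 3e-7, True),     # "likely," but backed up
    ("perception software fault", 2e-6, False),  # fails the rule
]

for name, prob, has_failsafe in failure_modes:
    if prob < THRESHOLD:
        verdict = "acceptable: shown less likely than one in a billion"
    elif has_failsafe:
        verdict = "acceptable: 'likely' to happen, but a fail-safe limits harm"
    else:
        verdict = "not certifiable: add a fail-safe or reduce the probability"
    print(f"{name}: {verdict}")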