A revolutionary AI technique is about to
transform the self-driving car.
When the Google self-driving-car project began about a decade ago, the company made a strategic decision to build its technology on expensive lidar and detailed mapping. Even today, Google’s self-driving technology still relies on those two pillars. While that approach is great up to a point—we have good algorithms for using lidar and camera data to localize a car on the map—it’s still not good enough. Driving on complicated, ever-changing streets involves perception and decision-making skills that are inherently uncertain (see “Your Driverless Ride Is Arriving,” p. 34).
Now an artificial-intelligence technology called deep learning is being used to address the problem. Rather than using the old method of hand-coded algorithms, we can now use systems that program themselves by learning from examples of how a system ought to behave in response to an input. Deep learning is now the best approach to most perception tasks, as well as to many low-level control tasks.
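The idea of a system that "programs itself" from examples can be made concrete with a toy sketch. This is not anything like a production driving stack—just a one-parameter-pair model fit by gradient descent, where the target rule (y = 2x + 1) and learning rate are invented for illustration:

```python
# Minimal sketch: instead of hand-coding a rule, fit parameters from
# input/output examples via gradient descent. The target behavior
# (y = 2x + 1) and learning rate are arbitrary choices for this demo.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # behavior we want to learn

w, b = 0.0, 0.0            # model: y_hat = w * x + b
lr = 0.01                  # learning rate
for _ in range(2000):
    for x, y in examples:
        y_hat = w * x + b
        err = y_hat - y    # how far off the current "program" is
        w -= lr * err * x  # nudge parameters to shrink the error
        b -= lr * err

# The mapping was never written by hand; it emerged from the examples.
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```

A real perception network differs only in scale: millions of parameters instead of two, and images instead of scalars, but the same learn-from-examples loop.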
A self-driving car needs a perception system to sense things that are moving (cars, people) as well as things that aren’t (lampposts, curbs). Self-driving vehicles detect dynamic objects using sensors such as cameras, laser scanners, and radar. Of these three, cameras are the cheapest, but they’re also used the least because it’s hard to translate images into detected objects. Using deep learning, we’re seeing dramatic improvements in the car’s ability to understand and make use of such images.
We’re also seeing significant gains from something called “multitask deep learning,” in which a system trained simultaneously to detect lane markings, cars, and pedestrians does better than three separate systems trained in isolation—since the single network can share information among the separate tasks.
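The sharing described above can be sketched structurally. This is an illustrative toy, not Drive.ai's architecture: a single shared "trunk" of features feeds one small head per task, and the layer sizes (8 inputs, 16 shared features) are invented for the demo:

```python
import random

# Illustrative multitask layout: one shared feature trunk, three task heads.
# Sizes are arbitrary demo values, not anything from a real driving stack.
random.seed(0)

IN, HIDDEN = 8, 16
TASKS = ["lanes", "cars", "pedestrians"]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

trunk = rand_matrix(HIDDEN, IN)                      # shared by every task
heads = {t: rand_matrix(1, HIDDEN) for t in TASKS}   # one small head per task

def forward(x, task):
    # Every task reads the same trunk activations (ReLU features).
    feats = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in trunk]
    return sum(w * f for w, f in zip(heads[task][0], feats))

scores = {t: forward([0.5] * IN, t) for t in TASKS}

# Three isolated networks would each need their own trunk; the multitask
# net needs one, and a gradient from any head refines it for all tasks.
shared_params = HIDDEN * IN + len(TASKS) * HIDDEN
separate_params = len(TASKS) * (HIDDEN * IN + HIDDEN)
print(shared_params, separate_params)  # 176 vs 432
```

The parameter count is the smaller point; the larger one is that training signal from, say, the pedestrian head improves the shared features the lane head also uses.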
Instead of relying entirely on a pre-computed map, the car can use the map as one of many data streams, combining it with sensor inputs to help it make decisions. (A neural network that knows from map data where crosswalks are, for example, can more accurately detect pedestrians trying to cross than one that relies solely on images.)
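The crosswalk example can be sketched as simple evidence fusion. This is a hedged toy, not the method the article describes: the prior probabilities and detector scores below are invented, and the odds-multiplication rule is just one naive way to combine two streams:

```python
# Illustrative sketch: treat map knowledge as one evidence stream and fuse
# it with a camera detection score in odds form. All numbers are invented.

def fuse(camera_score, near_crosswalk):
    """Combine a detector's pedestrian probability with a map-based prior."""
    prior = 0.30 if near_crosswalk else 0.05  # hypothetical crosswalk prior
    # Multiply the odds contributed by each stream (naive-Bayes-style).
    odds = (camera_score / (1.0 - camera_score)) * (prior / (1.0 - prior))
    return odds / (1.0 + odds)

# The same ambiguous camera evidence resolves differently depending on
# whether the map says a crosswalk is nearby.
print(round(fuse(0.6, True), 2), round(fuse(0.6, False), 2))
```

In a learned system the fusion weights would themselves be trained, but the shape of the benefit is the same: map context raises confidence exactly where pedestrians are likelier to appear.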
Deep learning can also alleviate one of the biggest issues identified by many who have ridden in a self-driving car—a “jerky” feel to the driving style, which sometimes leads to motion sickness. But a car trained using examples of humans driving can offer a ride that feels more natural.
It’s still early. But just as it did with image search and voice recognition, deep learning is likely to forever change the course of self-driving cars.
Carol Reiley is the cofounder of Drive.ai.
Manufacturing fell behind the information
revolution. That’s about to change.
Since 1994, the number of manufacturing jobs in the U.S. has dropped by almost 30 percent. The common explanation has been that domestic factories need fewer workers because they’ve become much more productive.
But that’s all wrong. The problem isn’t that we’re too productive. The problem is we’re still not productive enough (see “Learning to Prosper in a Factory Town”).
Yes, it’s true that since 1994 manufacturing labor productivity has doubled. But if you measure something called “multi-