The Seven Deadly Sins of AI Predictions
Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future.
By Rodney Brooks

We are surrounded by hysteria about the future of artificial intelligence and robotics—hysteria about how powerful they will become, how quickly, and what they will do to jobs.
I recently saw a story in MarketWatch that said robots will take half of today’s jobs in 10 to 20 years. It even had a graphic to prove the numbers.
The claims are ludicrous. (I try to
maintain professional language, but sometimes …) For instance, the story appears
to say that we will go from one million
grounds and maintenance workers in
the U.S. to only 50,000 in 10 to 20 years,
because robots will take over those jobs.
How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of
robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see
the end of more than 90 percent of jobs
that currently require physical presence
at some particular site.
Mistaken predictions lead to fears
of things that are not going to happen,
whether it’s the wide-scale destruction
of jobs, the Singularity, or the advent of
AI that has values different from ours
and might try to destroy us. We need to
push back on these mistakes. But why are
people making them? I see seven common reasons.
1. Overestimating and underestimating
Roy Amara was a cofounder of the Institute for the Future, in Palo Alto, the intellectual heart of Silicon Valley. He is best known for his adage, now referred to as Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
There is a lot wrapped up in these 21
words. An optimist can read it one way,
and a pessimist can read it another.
A great example of the two sides of
Amara’s Law is the U.S. Global Positioning System. Starting in 1978, a constellation of 24 satellites (now 31, including spares) was placed in orbit. The goal of
GPS was to allow precise delivery of munitions by the U.S. military. But the program
was nearly canceled again and again in
the 1980s. The first operational use for its
intended purpose was in 1991 during Desert Storm; it took several more successes
for the military to accept its utility.
Today GPS is in what Amara would
call the long term, and the ways it is used
were unimagined at first. My Series 2 Apple Watch uses GPS while I am out running, recording my location accurately enough to see which side of the street I run along. The tiny size and price of the receiver would have been incomprehensible to the early GPS engineers.
The technology synchronizes physics
experiments across the globe and plays
an intimate role in synchronizing the U.S.
electrical grid and keeping it running. It
even allows the high-frequency traders
who really control the stock market to
mostly avoid disastrous timing errors.
It is used by all our airplanes, large and
small, to navigate, and it is used to track
people out of prison on parole. It determines which seed variant will be planted in which part of many fields across the globe. It tracks fleets of trucks and reports on driver performance.
GPS started out with one goal, but it
was a hard slog to get it working as well as
was originally expected. Now it has seeped
into so many aspects of our lives that we
would not just be lost if it went away; we
would be cold, hungry, and quite possibly dead.
We see a similar pattern with other
technologies over the last 30 years. A big
promise up front, disappointment, and
then slowly growing confidence in results
that exceed the original expectations. This
is true of computation, genome sequencing, solar power, wind power, and even
home delivery of groceries.
AI has been overestimated again and
again, in the 1960s, in the 1980s, and I
believe again now, but its prospects for the
long term are also probably being underestimated. The question is: How long is
the long term? The next six errors help
explain why the time scale is being grossly
underestimated for the future of AI.
2. Imagining magic
When I was a teenager, Arthur C. Clarke
was one of the “big three” science fiction
writers, along with Robert Heinlein and
Isaac Asimov. But Clarke was also an
inventor, a science writer, and a futurist.
Between 1962 and 1973 he formulated