how far can a person throw a Frisbee? Can
a person eat a Frisbee? Roughly how many
people play Frisbee at once? Can a three-month-old person play Frisbee? Is today’s
weather suitable for playing Frisbee?
Computers that can label images like
“people playing Frisbee in a park” have
no chance of answering those questions.
Besides the fact that they can only label
more images and cannot answer questions at all, they have no idea what a person is, that parks are usually outside, that
people have ages, or that weather is anything more than how it makes a photo look.
This does not mean that these systems are useless; they are of great value
to search engines. But here is what goes
wrong. People hear that some robot or
some AI system has performed some task.
They then generalize from that performance to a competence that a person performing the same task could be expected
to have. And they apply that generalization to the robot or AI system.
Today’s robots and AI systems are
incredibly narrow in what they can do.
Human-style generalizations do not apply.
4. Suitcase words
Marvin Minsky called words that carry
a variety of meanings “suitcase words.”
“Learning” is a powerful suitcase word;
it can refer to so many different types of
experience. Learning to use chopsticks is
a very different experience from learning
the tune of a new song. And learning to
write code is a very different experience
from learning your way around a city.
When people hear that machine learning is making great strides in some new
domain, they tend to use as a mental
model the way in which a person would
learn that new domain. However, machine
learning is very brittle, and it requires lots
of preparation by human researchers or
engineers, special-purpose coding, special-purpose sets of training data, and a
custom learning structure for each new
problem domain. Today’s machine learning is not at all the sponge-like learning
that humans engage in, making rapid
progress in a new domain without having
to be surgically altered or purpose-built.
Likewise, when people hear that
a computer can beat the world chess
champion (in 1997) or one of the world’s
best Go players (in 2016), they tend to
think that it is “playing” the game just
as a human would. Of course, in reality those programs had no idea what a
game actually was, or even that they were
playing. They were also much less adaptable. When humans play a game, a small
change in rules does not throw them off.
Not so for AlphaGo or Deep Blue.
Suitcase words mislead people about
how well machines are doing at tasks that
people can do. That is partly because AI
researchers—and, worse, their institutional press offices—are eager to claim
progress in an instance of a suitcase
concept. The important phrase here is
“an instance.” That detail soon gets lost.
Headlines trumpet the suitcase word,
and warp the general understanding of
where AI is and how close it is to accomplishing more.
5. Exponentials
Many people are suffering from a severe
case of “exponentialism.”
Everyone has some idea about Moore’s
Law, which suggests that computers get
better and better on a clockwork-like
schedule. What Gordon Moore actually
said was that the number of components
that could fit on a microchip would double
every year. That held true for 50 years,
although the time constant for doubling
gradually lengthened from one year to
over two years, and the pattern is coming
to an end.
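As an illustrative sketch (not from the article), the arithmetic behind this claim is simple compound doubling; the function and numbers below are hypothetical, chosen only to show how sensitive exponential growth is to the doubling period:

```python
# Illustrative sketch of Moore's-Law-style growth: starting from `start`
# components, the count doubles once every `period` years.

def components(start: float, years: float, period: float) -> float:
    """Component count after `years`, doubling every `period` years."""
    return start * 2 ** (years / period)

# Doubling every year for a decade multiplies the count by 1,024.
print(components(1, 10, 1))  # 1024.0
# Stretch the doubling period to two years and the same decade yields only 32x.
print(components(1, 10, 2))  # 32.0
```

The point of the comparison: lengthening the time constant from one year to two does not merely halve the growth, it collapses it, which is why a slowing Moore's Law feels so different from the clockwork version people carry in their heads.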
Doubling the components on a chip
has made computers continually double in speed. And it has led to memory
chips that quadruple in capacity every
two years. It has also led to digital cameras that have better and better resolution, and LCD screens with exponentially more pixels.
The reason Moore’s Law worked is
that it applied to a digital abstraction of
a true-or-false question. In any given circuit, is there an electrical charge or voltage there or not? The answer remains
clear as chip components get smaller and
smaller—until a physical limit intervenes,
and we get down to components with so
few electrons that quantum effects start to
dominate. That is where we are now with
our silicon-based chip technology.
When people are suffering from exponentialism, they may think that the exponentials they use to justify an argument
are going to continue apace. But Moore’s
Law and other seemingly exponential laws
can fail because they were not truly exponential in the first place.