Self-driving cars, computers that win Go championships, and just about
every other AI advance you’ve heard of all depend on a breakthrough
that’s three decades old. Keeping up the pace of progress will require
confronting AI’s serious limitations.

By James Somers

I’m standing in what is soon to be the center of the world,
or is perhaps just a very large room on the seventh floor of a
gleaming tower in downtown Toronto. Showing me around is
Jordan Jacobs, who cofounded this place: the nascent Vector
Institute, which opens its doors this fall and which is aiming to
become the global epicenter of artificial intelligence.
We’re in Toronto because Geoffrey Hinton is in Toronto, and
Hinton is the father of “deep learning,” the technique behind the
current excitement about AI. “In 30 years we’re going to look
back and say Geoff is Einstein—of AI, deep learning, the thing
that we’re calling AI,” Jacobs says. Of the AI researchers at the
top of the field of deep learning, Hinton has more citations than
the next three combined. His students and postdocs have gone
on to run the AI labs at Apple, Facebook, and OpenAI; Hinton
himself is a lead scientist on the Google Brain AI team. In fact,
nearly every achievement in the last decade of AI—in translation, speech recognition, image recognition, and game playing—
traces in some way back to Hinton’s work.
The Vector Institute, this monument to the ascent of
Hinton’s ideas, is a research center where companies from
around the U.S. and Canada—like Google, and Uber, and
Nvidia—will sponsor efforts to commercialize AI technologies.
Money has poured in faster than Jacobs could ask for it; two
of his cofounders surveyed companies in the Toronto area, and
the demand for AI experts turned out to be 10 times the number
Canada produces every year. Vector is in a sense ground zero for
the now-worldwide attempt to mobilize around deep learning:
to cash in on the technique, to teach it, to refine and apply it.
Data centers are being built, towers are being filled with startups, a whole generation of students is going into the field.
The impression you get standing on the Vector floor, bare
and echoey and about to be filled, is that you’re at the beginning
of something. But the peculiar thing about deep learning
is just how old its key ideas are. Hinton’s breakthrough
paper, with colleagues David Rumelhart and Ronald Williams,
was published in 1986. The paper elaborated on a technique
called backpropagation, or backprop for short. Backprop, in the
words of Jon Cohen, a computational psychologist at Princeton,
is “what all of deep learning is based on—literally everything.”
When you boil it down, AI today is deep learning, and deep
learning is backprop—which is amazing, considering that
backprop is more than 30 years old. It’s worth understanding
how that happened—how a technique could lie in wait
for so long and then cause such an explosion—because once
you understand the story of backprop, you’ll start to understand
the current moment in AI, and in particular the fact
that maybe we’re not actually at the beginning of a revolution.
Maybe we’re at the end of one.
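
The story here doesn’t spell out what backprop actually does, so a toy
illustration may help: below is a minimal sketch, in plain Python with
NumPy, of backprop training a tiny two-layer network on the XOR problem.
The network shape, the learning rate, and the squared-error loss are
illustrative assumptions, not details from Rumelhart, Hinton, and
Williams’s 1986 paper.

```python
# A minimal, illustrative sketch of backpropagation: nudge every weight in a
# layered network in the direction that reduces the error. All specifics here
# (XOR data, 4 hidden units, learning rate) are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR function, which a single-layer network cannot learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 4 units, one output unit.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # output predictions
    loss = np.mean((p - y) ** 2)      # mean squared error

    # Backward pass (backprop): apply the chain rule layer by layer,
    # starting from the error and working back toward the inputs.
    dp = 2.0 * (p - y) / len(X)       # d loss / d prediction
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T                   # error signal sent back to the hidden layer
    dz1 = dh * h * (1 - h)            # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent: adjust each weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # after training, typically close to [[0], [1], [1], [0]]
```

The same two steps, a forward pass to make a prediction and a backward pass
to assign blame for the error, scale up to the far larger networks behind
today’s AI systems.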
[Image caption: This publication from the mid-1980s showed how to train a
neural network with many layers. It set the stage for this decade’s
progress in AI.]