says David Cox. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it's true that some AI applications don't even have to be programmed anymore: they're based on architectures that allow them to learn from data.
Yet there is still something clumsy and brute-force about it, says Cox, a neuroscientist at Harvard. "To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren't dogs," he says. "My daughter only had to see one dog"—and has happily pointed out puppies ever since. And the knowledge that today's AI does manage to extract from all that data can be oddly fragile. Add some artful static to an image—noise that a human wouldn't even notice—and the computer might just mistake a dog for a dumpster. That's not good if people are using facial recognition for, say, security on smartphones. (See "Is AI Riding a One-Trick Pony?" on page 28.)
To overcome such limitations, Cox and dozens of other neuroscientists and machine-learning experts joined forces last year for the Machine Intelligence from Cortical Networks (MICrONS) initiative: a $100 million effort to reverse-engineer the brain. It will be the neuroscience equivalent of a moonshot, says Jacob Vogelstein, who conceived and launched MICrONS when he was a program officer for the Intelligence Advanced Research Projects Activity, the U.S. intelligence community's research arm. (He is now at the venture capital firm Camden Partners in Baltimore.) MICrONS researchers are attempting to chart the function and structure of every detail in a small piece of rodent cortex.
Previous pages: A rat brain in a dish and a rendering of two neurons with spiny dendrites. This page, top: A technician observes the brain of a live rat during a test. Bottom: After the test, the animal's brain has been removed. Opposite page: The brain is glued to a plate before being scanned.