up in awe-inspiring detail—cell membranes, mitochondria,
neurotransmitter-filled vesicles crowding at the synapses. It’s
like zooming in on a fractal: the closer you look, the more complexity you see.
Slicing is hardly the end of the story. Even as the scans come
pouring out of the microscope—“You’re sort of making a movie
where each slice is deeper,” says Lichtman—they are forwarded
to a team led by Harvard computer scientist Hanspeter Pfister.
“Our role is to take the images and extract as much information
as we can,” says Pfister.
That means reconstructing all those three-dimensional
neurons—with all their organelles, synapses, and other features—from a stack of 2-D slices. Humans could do it with
paper and pencil, but that would be hopelessly slow, says Pfister. So he and his team have trained neural networks to track
the real neurons. “They perform a lot better than all the other
methods we’ve used,” he says.
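The reconstruction task can be made concrete with a deliberately naive sketch. The MICrONS team uses trained neural networks, not this; the function and toy arrays below are hypothetical, and simply illustrate the underlying idea of chaining labeled regions in adjacent 2-D slices into 3-D objects wherever they overlap.

```python
# Naive illustration (not the MICrONS method): link 2-D segment labels
# across adjacent slices by pixel overlap, one simple way to begin
# rebuilding 3-D objects from a stack of 2-D images.
import numpy as np

def link_slices(slice_a, slice_b):
    """Return pairs (label_in_a, label_in_b) whose regions overlap."""
    links = set()
    mask = (slice_a > 0) & (slice_b > 0)      # pixels labeled in both slices
    for la, lb in zip(slice_a[mask], slice_b[mask]):
        links.add((int(la), int(lb)))
    return sorted(links)

# Two toy 4x4 slices: label 1 in slice A sits above label 7 in slice B,
# and label 2 sits above label 9, so they chain into two 3-D objects.
a = np.array([[0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 2]])
b = np.array([[0, 0, 7, 7],
              [0, 0, 7, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 9]])
print(link_slices(a, b))   # [(1, 7), (2, 9)]
```

Real neurites are thin, tangled, and often shift position between slices, which is why naive overlap fails in practice and learned methods perform so much better.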
Each neuron, no matter its size, puts out a forest of tendrils known as dendrites, and each has another long, thin fiber
called an axon for transmitting nerve impulses over long distances—completely across the brain, in extreme cases, or even
all the way down the spinal cord. But by mapping a cubic millimeter as MICrONS is doing, researchers can follow most of
these fibers from beginning to end and thus see a complete
neural circuit. “I think we’ll discover things,” Pfister says.
“Probably structures we never suspected, and completely new
insights into the wiring.”
Among the questions the MICrONS teams hope to
begin answering: What are the brain’s algorithms?
How do all those neural circuits actually work? And
in particular, what is all that feedback doing?
Many of today’s AI applications don’t use feedback. Electronic signals in most neural networks
cascade from one layer of nodes to the next, but
generally not backward. (Don’t be thrown by the
term “backpropagation,” which is a way to train neural networks.) That’s not a hard-and-fast rule: “recurrent” neural
networks do have connections that go backward, which helps
them deal with inputs that change with time. But none of
them use feedback on anything like the brain’s scale. In one
well-studied part of the visual cortex, says Tai Sing Lee at
Carnegie Mellon, “only 5 to 10 percent of the synapses are listening to input from the eyes.” The rest are listening to feedback from higher levels in the brain.
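The feedforward-versus-recurrent distinction can be sketched in a few lines. This is a purely illustrative toy, with made-up weights, and no claim about how cortex computes: a feedforward step maps input to output in one direction, while a recurrent step also receives its own previous state as feedback, which is what lets it handle inputs that change with time.

```python
# Illustrative sketch of feedforward vs. recurrent computation.
import math

def feedforward_step(x, w):
    # output depends on the current input alone; nothing flows backward
    return math.tanh(w * x)

def recurrent_step(x, h_prev, w_in, w_back):
    # feedback: the previous state h_prev re-enters the computation
    return math.tanh(w_in * x + w_back * h_prev)

h = 0.0
for x in [1.0, 0.5, -0.3]:       # an input signal that changes with time
    h = recurrent_step(x, h, w_in=0.8, w_back=0.5)
print(round(h, 3))
```

The feedback weight (`w_back` here) gives the network a memory of its own recent activity; in the brain, that feedback vastly outnumbers the feedforward input, as Lee's figure suggests.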
There are two broad theories about what the feedback is for,
says Cox, and “one is the notion that the brain is constantly trying to predict its own inputs.” While the sensory cortex is processing this frame of the movie, so to speak, the higher levels of
the brain are trying to anticipate the next frame, and passing
their best guesses back down through the feedback fibers.
This may be the only way the brain can deal with a fast-moving environment. “Neurons are really slow,” Cox says. “It can take 170 to 200 milliseconds to go from light hitting the retina through all the stages of processing up to the level of conscious perception. In that time, Serena Williams’s tennis serve
travels nine meters.” So anyone who manages to return that
serve must be swinging her racket on the basis of prediction.
And if you’re constantly trying to predict the future, Cox
says, “then when the real future arrives, you can adjust to
make your next prediction better.” That meshes well with the
second major theory being explored: that the brain’s feedback connections are there to guide learning. Indeed, computer simulations show that a struggle for improvement forces
any system to build better and better models of the world.
For example, Cox says, “you have to figure out how a face will
appear if it turns.” And that, he says, may turn out to be a critical piece of the one-shot-learning puzzle.
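That predict-compare-adjust loop can be caricatured in a few lines. The code below is an assumption-level toy, not a model of the brain: a one-parameter predictor guesses the next frame of a steadily rising signal and nudges its weight whenever the real future disagrees with the guess.

```python
# Toy "predict, compare, adjust" loop: learn the step size of a signal
# by correcting each prediction when the real future arrives.
signal = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]   # the "movie": next = current + 0.5
weight = 0.0                               # the predicted step size
lr = 0.5                                   # learning rate

for now, future in zip(signal, signal[1:]):
    prediction = now + weight              # best guess at the next frame
    error = future - prediction            # the real future arrives...
    weight += lr * error                   # ...adjust to predict better

print(round(weight, 4))   # weight has climbed toward the true step, 0.5
```

Each pass through the loop shrinks the prediction error, so the internal model of the signal keeps improving, which is the "struggle for improvement" the simulations Cox describes rely on.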
“When my daughter first saw a dog,” says Cox, “she didn’t
have to learn about how shadows work, or how light bounces
off surfaces.” She had already built up a rich reservoir of experience about such things, just from living in the world. “So
Opposite page, top row: Scans of brain slices are stitched together
by an algorithm. Middle row: A “multibeam field of view,” made of 61
images taken by the electron microscope, is seen at left; 14 multibeam
fields of view are combined at right. Bottom row: Scans are assembled
into a cube and colorized.