Computer, Write Me a Song
Google says its AI software could make creative suggestions to help musicians, architects, and visual artists.

Last summer the Internet was overrun by
psychedelic images: swirling skies sprouting dog faces, Van Gogh masterpieces
embellished with dozens of staring eyes.
By running their image-recognition algorithms in reverse, Google researchers had
found they could generate images that
some call art. At an auction in February,
a print made using their “DeepDream”
software fetched $8,000.
But DeepDream images are limited,
says Douglas Eck, a researcher in Google’s
main artificial-intelligence research
group, Google Brain. Now a new Google project called Magenta aims to build creative software that can generate more sophisticated artworks in music, video, and text.
Magenta will draw on Google’s latest
research into artificial neural networks,
which underpin what CEO Sundar Pichai
calls his company’s “AI first” strategy. Eck
says he wants to help artists, creative professionals, and just about anyone else
experiment and even collaborate with
creative software.
“As a writer you could be getting from
a computer a handful of partially written
ideas that you can then run with,” says
Eck. “Or you’re an architect and the computer generates a few directions for a project you didn’t think of.”
Those scenarios are a ways off. But
Project Magenta collaborator Adam Roberts has demonstrated prototype software
that gives a hint of how a musician might
collaborate with a creative machine. He
tapped out a handful of notes on a virtual Moog synthesizer. At the click of a mouse, the software extrapolated them into a short tune, complete with key changes and recurrent phrases. The software learned to do that by analyzing a
database of nearly 4,500 pop-music tunes.
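The workflow Roberts demonstrated can be sketched in miniature. Magenta itself uses neural networks, but a toy Markov chain makes the same point: learn transition patterns from a corpus of tunes, then extrapolate a seed phrase into a longer melody. The corpus and note values below are hypothetical.

```python
import random

# Toy illustration only: Magenta uses neural networks, not this
# Markov chain, but the workflow is similar -- learn note-to-note
# patterns from a corpus of tunes, then extrapolate a seed phrase.

def train(tunes):
    """Count note-to-note transitions across a corpus of tunes
    (each tune is a list of MIDI pitch numbers)."""
    table = {}
    for tune in tunes:
        for a, b in zip(tune, tune[1:]):
            table.setdefault(a, []).append(b)
    return table

def extrapolate(seed, table, length, rng=random.Random(0)):
    """Extend a handful of seed notes into a longer tune by
    sampling from the learned transitions."""
    tune = list(seed)
    while len(tune) < length:
        choices = table.get(tune[-1])
        if not choices:          # unseen note: fall back to the seed
            choices = seed
        tune.append(rng.choice(choices))
    return tune

corpus = [[60, 62, 64, 65, 64, 62, 60],   # hypothetical training tunes
          [60, 64, 67, 64, 60]]
model = train(corpus)
print(extrapolate([60, 62], model, 8))
```

A neural sequence model replaces the lookup table with learned weights, but the interaction is the same: a musician supplies a few notes and the software continues them in the style of its training data.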
Eck thinks it learned how to make key
changes and melodic loops because it uses
a crude form of attention, loosely inspired by human cognition, to extract useful information from tunes it has analyzed.
Researchers at Google and elsewhere are
using such attention mechanisms to make
learning software capable of understanding complex sentences or images (see “AI’s
Unspoken Problem,” page 28).
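The core idea behind such attention mechanisms can be shown in a few lines. This is an illustrative sketch, not Magenta's code: when predicting the next note, score every earlier note for relevance to the current query, then blend the memories by those softmax weights. The 2-d "note embeddings" below are made up for the example.

```python
import math

# Illustrative sketch of an attention mechanism, not Magenta's code:
# score every remembered item for relevance to a query, then blend
# them by those softmax weights to form a "context" vector.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, memory):
    """Weight each remembered vector by its dot-product similarity
    to the query, then return the weighted blend (the context)."""
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory]
    weights = softmax(scores)
    context = [sum(w * mem[i] for w, mem in zip(weights, memory))
               for i in range(len(query))]
    return weights, context

# Hypothetical 2-d note embeddings: the query most resembles the
# first remembered note, so that note gets the largest weight.
memory = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attend([1.0, 0.0], memory)
print(weights)
```

The payoff is selectivity: instead of treating every past note equally, the model leans on whichever earlier material is most relevant, which is one way recurring phrases and key relationships can be carried forward.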
Ideas that helped Google’s AlphaGo
software beat one of the world’s top Go
players this year could also help the quest
for creative software. AlphaGo’s design
made use of an approach called reinforcement learning, in which software picks
up new skills a little like an animal—it is
programmed to try to maximize a virtual
reward. Eck thinks reinforcement learning could make software capable of more
complex artworks. For example, the sample tunes from Magenta’s current demo
lack the kind of larger structure we expect
in a song.
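Reinforcement learning's reward-maximizing loop is simple to sketch. The toy below is not AlphaGo's method or anything from Magenta: an epsilon-greedy "bandit" agent tries actions, observes a numeric reward, and gradually prefers whatever pays best. For music, the reward might come from a critic scoring a tune's structure; here the reward values are hypothetical constants.

```python
import random

# Toy reinforcement-learning loop (illustration only, not AlphaGo
# or Magenta): the agent tries actions, observes a reward, and
# gradually prefers whatever maximizes that reward.

def train_bandit(rewards, steps=2000, epsilon=0.1,
                 rng=random.Random(0)):
    """Epsilon-greedy learning over a fixed set of actions.
    `rewards[a]` is the (hidden) payoff of action a."""
    values = [0.0] * len(rewards)   # running reward estimates
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < epsilon:           # explore a random action
            a = rng.randrange(len(rewards))
        else:                                # exploit the best so far
            a = values.index(max(values))
        counts[a] += 1
        # incremental average of the observed reward for action a
        values[a] += (rewards[a] - values[a]) / counts[a]
    return values

# Hypothetical reward signal: the critic likes action 2 best.
values = train_bandit([0.1, 0.5, 0.9])
print(values.index(max(values)))
```

The appeal for creative software is that a reward signal can encode global properties, such as a song's large-scale structure, that are hard to capture by imitating a corpus note by note.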
Google’s project could bring more
attention and resources to a field of
research that has existed for a long time
in academia but is smaller than areas of
artificial intelligence with more obvious
business applications, says Mark Riedl,
an associate professor at Georgia Tech,
who builds software that generates stories
and video games. However, he notes that
Google’s move into creative artificial intelligence is unlikely to yield quick progress
on a question that looms over the field of
computational creativity: can a machine
ever be an artist in its own right, not just
a tool directed by a human artist?
Good human artists generally start
out emulating established artists before
developing new styles and genres of their
own, guided by an evolving artistic motivation, says Riedl. How software could
develop artistic autonomy is unclear.
“Neural networks are kind of in the imitation mode,” he says. “You can pipe in
the works of the classics and they’ll learn
patterns, but they need to learn creative
intent somewhere.” —Tom Simonite