Greene of the University of Pennsylvania. This data could be shared more widely, helping to advance research, while the real records are tightly protected.

THE GANFATHER, PART III: THE BAD FELLOWS

There is a darker side, however. A machine designed to create realistic fakes is a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections. AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of politicians. GANs didn’t create this problem, but they’ll make it worse.

Hany Farid, who studies digital forensics at Dartmouth College, is working on better ways to spot fake videos, such as detecting slight changes in the color of faces caused by inhaling and exhaling that GANs find hard to mimic precisely. But he warns that GANs will adapt in turn. “We’re fundamentally in a weak position,” says Farid.

This cat-and-mouse game will play out in cybersecurity, too. Researchers are already highlighting the risk of “black box” attacks, in which GANs are used to figure out the machine-learning models with which plenty of security programs spot malware. Having divined how a defender’s algorithm works, an attacker can evade it and insert rogue code. The same approach could also be used to dodge spam filters and other defenses.

Goodfellow is well aware of the dangers. Now heading a team at Google that’s focused on making machine learning more secure, he warns that the AI community must learn the lesson of previous waves of innovation, in which technologists treated security and privacy as an afterthought. By the time they woke up to the risks, the bad guys had a significant lead. “Clearly, we’re already beyond the start,” he says, “but hopefully we can make significant advances in security before we’re too far in.”

Nonetheless, he doesn’t think there will be a purely technological solution to fakery. Instead, he believes, we’ll have to rely on societal ones, such as teaching kids critical thinking by getting them to take things like speech and debating classes. “In speech and debate you’re competing against another student,” he says, “and you’re thinking about how to craft misleading claims, or how to craft correct claims that are very persuasive.” He may well be right, but his conclusion that technology can’t cure the fake-news problem is not one many will want to hear.

Martin Giles is MIT Technology Review’s San Francisco bureau chief.
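Farid’s cue — faint periodic color changes in a real face driven by blood flow and breathing — can be illustrated with a toy signal. This sketch is not Farid’s actual method: the `dominant_frequency` helper, the synthetic traces, and all thresholds are invented here purely to show the idea of averaging a face region’s green channel per frame and looking for a physiological frequency.

```python
import numpy as np

def dominant_frequency(per_frame_green, fps):
    """Strongest non-DC frequency (Hz) in a per-frame mean-green-channel
    trace — a crude stand-in for a pulse/breathing cue."""
    x = per_frame_green - np.mean(per_frame_green)  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

fps = 30
t = np.arange(10 * fps) / fps  # 10 seconds of "video"

# Real face: mean green intensity pulses faintly at ~1.2 Hz (72 bpm).
real = (120 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
        + 0.1 * np.random.default_rng(1).normal(size=t.size))
# Naive fake: skin tone is static apart from sensor noise.
fake = 120 + 0.1 * np.random.default_rng(2).normal(size=t.size)

print(dominant_frequency(real, fps))  # a peak near 1.2 Hz
print(dominant_frequency(fake, fps))  # no physiological peak; noise-driven
```

A real detector would first need to locate and stabilize the face region across frames, which is where much of the difficulty lies.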
[Photo caption: Goodfellow’s creation can be used to imagine all sorts of things, including new interior designs.]
[Photo caption: Getting GANs to work well can be tricky. If there are glitches, the results can be bizarre.]
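The “black box” attacks the article describes can be sketched in miniature: an attacker who sees only a defender’s verdicts queries it, trains a substitute model on those answers, and then uses the substitute’s gradients to nudge a flagged sample until the real defender passes it. Everything below is a toy assumption for illustration — a linear “malware detector” and a hand-rolled logistic-regression substitute — not a real attack, and not the GAN-based variant researchers warn about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box "defender": the attacker sees only its 0/1 verdicts, never W_SECRET.
W_SECRET = np.array([2.0, -1.0, 0.5])
def defender(x):
    return int(x @ W_SECRET > 0)  # 1 = flagged as malware

# Step 1: query the black box on random inputs.
X = rng.normal(size=(500, 3))
y = np.array([defender(x) for x in X])

# Step 2: fit a substitute model (logistic regression via gradient descent).
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(X)

# Step 3: use the substitute's gradient to perturb a flagged sample
# until the real defender no longer flags it.
sample = np.array([1.0, 0.0, 0.0])
adv = sample.copy()
for _ in range(100):
    if defender(adv) == 0:
        break
    adv -= 0.05 * w / np.linalg.norm(w)  # step against the substitute's decision

print(defender(sample), defender(adv))  # flagged vs. evaded
```

The uncomfortable point the sketch makes concrete: the attacker never needs the defender’s model, only enough queries to approximate it.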