Real or Fake? AI Is Making It Very Hard to Know

Thanks to machine learning, it's becoming easy to generate realistic video, and to impersonate someone.

News headlines might not be the only things that are fake in the future. Powerful machine-learning techniques are making it increasingly easy to manipulate or generate realistic video and audio, and to impersonate anyone you want with amazing accuracy.
A smartphone app called FaceApp, released recently by a company based in Russia, can automatically modify someone's face to add a smile, add or subtract years, or swap genders. The app can also apply "beautifying" effects that include smoothing out wrinkles and, more controversially, lightening the skin. And a company called Lyrebird, which was spun out of the University of Montreal, has demonstrated technology that it says can be used to impersonate someone's voice. It showed off the system with clips in which Barack Obama, Donald Trump,
and Hillary Clinton apparently endorsed the technology. These are just two examples of how the most powerful AI algorithms can be used for generating content rather than simply analyzing data.

Home Security Assistant

TO MARKET A new smart-home assistant and security monitor can tell specific adults apart, spot kids and pets, and send you smartphone alerts about what they're up to. Lighthouse uses two cameras, including a 3-D time-of-flight camera that can see how far away an object is and distinguish objects in the foreground from those in the background. If the device finds something that may be interesting (say, your kids walking into the living room at 11 P.M., or an unknown person in the house while you're out), it will send the information to a remote server, which analyzes the data and works with a Lighthouse app running on your smartphone to figure out what to do with it. —Rachel Metz
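The Lighthouse monitor described above relies on a time-of-flight camera, which assigns a distance to every pixel; that turns separating foreground objects from the background into a simple thresholding problem. The sketch below illustrates the idea on a synthetic depth map with NumPy. The depth values and threshold are invented for illustration and are not Lighthouse's actual pipeline.

```python
import numpy as np

# Synthetic 4x4 depth map in meters (hypothetical values, not real sensor data).
# A time-of-flight camera produces one distance reading per pixel like this.
depth = np.array([
    [1.2, 1.1, 3.9, 4.0],
    [1.0, 0.9, 4.1, 4.2],
    [1.1, 1.0, 3.8, 4.0],
    [4.1, 4.0, 3.9, 4.1],
])

# Anything closer than the threshold counts as foreground (a person or pet);
# anything farther counts as background (walls, furniture).
THRESHOLD_M = 2.0
foreground = depth < THRESHOLD_M

print(foreground.sum())  # → 6 foreground pixels
```

A real system would feed a mask like this to downstream recognition models so they only analyze the regions where something is actually moving in the room.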
Powerful graphics hardware and software, as well as new video-capture technologies, are also driving this trend. Last year researchers at Stanford University demonstrated a facial-reenactment program called Face2Face. It manipulates video footage so that a person's facial expressions match those of someone else being tracked with a depth-sensing camera. The result is often eerily realistic.
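Conceptually, a reenactment system like Face2Face tracks facial landmarks on a source actor and re-applies their motion to a target face. The toy sketch below transfers landmark displacements in 2-D with NumPy. The three landmarks and their coordinates are invented for illustration and bear no relation to Face2Face's actual dense 3-D face model.

```python
import numpy as np

# Hypothetical 2-D landmark positions (say, two mouth corners and the chin).
source_neutral = np.array([[10.0, 20.0], [30.0, 20.0], [20.0, 35.0]])
source_smiling = np.array([[ 9.0, 18.0], [31.0, 18.0], [20.0, 35.0]])
target_neutral = np.array([[50.0, 60.0], [70.0, 60.0], [60.0, 75.0]])

# How far each source landmark moved when the actor smiled.
displacement = source_smiling - source_neutral

# Apply the same motion to the target face's landmarks. A real system would
# then warp the target video frame so its pixels follow these new positions.
target_animated = target_neutral + displacement
print(target_animated)
```

The hard parts Face2Face solves, of course, are tracking the landmarks robustly in live video and rendering the warped face photorealistically; the transfer step itself is this simple in spirit.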
Both FaceApp and Lyrebird use deep generative convolutional networks to enable such tricks. They are applying an emerging technique that lets algorithms go beyond simply learning to classify things and instead generate plausible data of their own, using very large, or deep, neural networks. Such networks are normally fed training data and tweaked so that they respond in the desired way to new input. For example, they can be trained to recognize faces or objects in images with amazing accuracy. But the same networks can then be made to generate their own data. For instance, such a network can generate images from scratch that look almost like the real thing. In the future, using the same techniques, it may become a lot easier to manipulate video, too.
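To make the "learn from data, then generate data" idea concrete, here is a minimal NumPy sketch: a tiny linear autoencoder learns to compress 3-D points into a 1-D code, and its decoder half is then reused to generate brand-new points from random codes. This is a toy stand-in for the deep convolutional models the article describes; the architecture, data, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 3-D points that actually lie near a 1-D line, plus noise.
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 0.5, -0.5]]) + 0.05 * rng.normal(size=(200, 3))

# Linear autoencoder: encoder E maps 3-D -> 1-D, decoder D maps 1-D -> 3-D.
E = 0.1 * rng.normal(size=(3, 1))
D = 0.1 * rng.normal(size=(1, 3))

lr, losses = 0.05, []
for _ in range(500):
    Z = X @ E          # encode
    X_hat = Z @ D      # decode (reconstruct)
    R = X_hat - X
    losses.append(float((R ** 2).mean()))
    # Descend along the gradient of the mean squared reconstruction error.
    grad_D = Z.T @ R / len(X)
    grad_E = X.T @ (R @ D.T) / len(X)
    D -= lr * grad_D
    E -= lr * grad_E

# The trained decoder now *generates*: feed it random codes drawn from the
# latent distribution and it emits new 3-D points resembling the data.
codes = rng.normal(0.0, (X @ E).std(), size=(20, 1))
samples = codes @ D
print(losses[0], losses[-1], samples.shape)
```

The same reversal is what the deep models exploit at far larger scale: a network that has learned the structure of faces or voices can be run "backwards" to synthesize plausible new ones.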