seems likely that Watson Health will be
a leader in applying AI to health care’s
woes. If Watson has not yet
accomplished a great deal along those
lines, one big reason is that it needs
certain types of data to be “trained.”
And in many cases such data is in
very short supply or difficult to access.
That’s not a problem unique to Watson.
It’s a catch-22 facing the entire field of
machine learning for health care.
Though the problem of missing and
inaccessible data may slow Watson
down, it may hurt IBM’s competitors
more. That’s because the best bet for
getting the data lies in close partnerships with large health-care organizations that tend to be technologically
conservative. And one thing IBM still
does very well in comparison to startups, or even giant rivals like Apple and
Google, is gain the trust of executives
and IT managers at big organizations.
The specific problems with the M.D.
Anderson project notwithstanding, IBM
has a crucial advantage. It’s getting
Watson inside a wide range of medical centers, health-care administration
groups, and life-science companies, all
of which are positioned to provide the
critical data needed to shape AI’s future.
The breakup with M.D. Anderson
seemed to show IBM choking on its
own hype about Watson.
The cancer center and IBM partnered in 2012. The goal was for Watson to read data about any patient’s
symptoms, gene sequence, and pathology reports, combine it with physicians’
notes on the patient and relevant journal articles, and then help doctors come
up with diagnoses and treatments. But
IBM and M.D. Anderson both overinflated expectations for the technology.
IBM claimed in 2013 that “a new era of computing has emerged” and gave Forbes
the impression that Watson “now tackles clinical trials” and would be in use with
patients in just a matter of months. In 2015, the Washington Post quoted an IBM
Watson manager describing how Watson was busy establishing a “collective intel-
ligence model between machine and man.” The Post said that the computer sys-
tem was “training alongside doctors to do what they can’t.”
In February of this year, the University of Texas, which runs M.D. Anderson,
announced it had shuttered the project, leaving the medical center out $39 million
in payments to IBM—for a project originally contracted at $2.4 million. After four
years it had not produced a tool for use with patients that was ready to go beyond
pilot tests. M.D. Anderson wouldn’t comment to me about Watson specifically, but
it appears that the problems stemmed mainly from internal struggles over how the
project was managed and funded.
That’s not to say IBM has no troubles with Watson. Indeed, they’re larger than
what any one implementation reveals.
To understand what’s slowing the progress, you have to understand how
machine-learning systems like Watson are trained. Watson “learns” by continually
rejiggering its internal processing routines in order to produce the highest possible
percentage of correct answers on some set of problems, such as which radiological
images reveal cancer. The correct answers have to be already known, so that the
“Health care has been an embarrassingly late
adopter of technology,” says Manish Kohli, a
physician and health-care informatics expert
with the Cleveland Clinic.