How to Root Out Hidden Biases in AI

Algorithms are making life-changing decisions like denying parole or granting loans. Cynthia Dwork, a computer scientist at Harvard, is developing ways of making sure the machines are operating fairly.

Why is it hard for algorithm designers or data scientists to account for bias and discrimination?
Take a work environment that’s hostile to women. Suppose that you define success as holding a job for two to three years and getting a promotion. Then your predictor, based on the historical data, will accurately predict that it’s not a good idea to hire women.

What’s interesting here is that we’re not talking about historical hiring decisions. Even if the hiring decisions were totally unbiased, the reality persists: the real discrimination in the hostile environment. It’s deeper, more structural, more ingrained, and harder to overcome.
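A minimal sketch of this dynamic, using synthetic data and scikit-learn; the data-generating story and every name here are illustrative assumptions, not something from the interview:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of equally skilled candidates; "group" is 1 for women.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Hiring itself is unbiased: everyone in this sample was hired.
# But the workplace is hostile to group 1, so the *label* -- staying
# two to three years and getting promoted -- is suppressed for that
# group regardless of skill.
p_success = 1 / (1 + np.exp(-(skill - 2.0 * group)))
success = rng.random(n) < p_success

# A predictor trained on these historical outcomes...
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, success)

# ...learns a large negative weight on group membership: it "accurately"
# predicts against hiring women, because the hostile environment, not
# the hiring decisions, generated the labels.
print(model.coef_)  # roughly [[ 1.0, -2.0 ]]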
I believe the great use for machine learning and AI will be in conjunction with really knowledgeable people who know history and sociology and psychology to figure out who should be treated similarly. I’m not saying computers will never be able to do it, but I don’t see it now.
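The idea of deciding who should be “treated similarly” is the one Dwork and her coauthors formalized in “Fairness Through Awareness” (2012): a classifier is individually fair with respect to a task-specific similarity metric if it maps similar individuals to similar outcomes, a Lipschitz condition. Below is a minimal sketch of auditing that condition for a deterministic scorer, with a stand-in metric; choosing the real metric is exactly where the domain experts the interview describes would come in, and all names here are illustrative:

```python
import numpy as np

def lipschitz_violations(score, metric, individuals, tol=1e-9):
    """Return pairs (x, y) where |score(x) - score(y)| > metric(x, y).

    `metric` encodes who counts as similar; defining it well requires
    domain knowledge, which is the interview's point.
    """
    bad = []
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if abs(score(x) - score(y)) > metric(x, y) + tol:
                bad.append((x, y))
    return bad

# Toy example: scores from a linear model, similarity from feature distance.
people = [np.array(v) for v in ([1.0, 0.2], [1.1, 0.3], [0.2, 0.9])]
score = lambda x: float(1 / (1 + np.exp(-x @ np.array([0.5, -0.3]))))
metric = lambda x, y: float(np.linalg.norm(x - y))

print(lipschitz_violations(score, metric, people))  # [] for this toy data
```

Nothing is flagged here because the toy score varies slowly relative to the metric; with a real model and a real metric, each flagged pair is two similar people the system treats differently.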
How do you know when you have the right model, and when it’s capturing what really happened in society? You need to have an understanding of what you’re talking about. There’s the famous saying: “All models are wrong, but some are useful.”
—as told to Will Knight