Value Judgments Live in the Numbers
Algorithms are among the most profound invisible influences in our lives. They help us find our driving destinations, the new music and fashions we like, what we want to read or view. They are also increasingly used in fundamental decisions about where we live, the schools we attend, the jobs we get, and what happens should we run afoul of the law.
An algorithm is a procedure or set of instructions to solve a problem or make data-based predictions. Many algorithms are secret because the companies that develop them consider them proprietary.
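In code, even a trivial algorithm encodes choices. The sketch below is a hypothetical example, invented for illustration (no real lender uses this exact rule): a few lines that turn an applicant's data into a prediction, where the cutoff value itself is a judgment call.

```python
def predict_default(income: float, debt: float) -> bool:
    """Predict loan default from a simple debt-to-income rule.

    The 0.4 cutoff is an invented value chosen only to make the
    example concrete; picking it is itself a value judgment.
    """
    return debt / income > 0.4

# An applicant with $50,000 income and $30,000 debt (ratio 0.6) is flagged.
print(predict_default(income=50_000, debt=30_000))
```

Whoever sets that 0.4 decides who gets flagged, which is the sense in which value judgments live in the numbers.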
Increasing evidence shows many algorithms incorporate the biases of the people who write them, and some are unintentionally discriminatory. A New York Times story by Claire Cain Miller reports, for example, that ad-targeting algorithms have shown ads for high-paying jobs to men but not women, and ads for high-interest loans to people in low-income neighborhoods but not in upscale areas.
While racial discrimination in housing is illegal, a Vox story by Alvin Chang illustrates how decisions on criteria for affordable housing, and the design of algorithms used in information and ads about housing availability, can inhibit a neighborhood’s racial integration and help keep poor neighborhoods poor.
Cynthia Dwork, a Microsoft Research computer scientist and a leading thinker on algorithm design and analysis, told the Times that computer science education must stress that algorithms embody value judgments and can therefore bias the way systems operate. “The goal of my work is to put fairness on a firm mathematical foundation, but even I have just begun to scratch the surface,” she said. “This entails finding a mathematically rigorous definition of fairness and developing computational methods—algorithms—that guarantee fairness.”
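To see what a rigorous definition might look like, consider one candidate from the fairness literature (not necessarily Dwork's own formulation, which the article does not detail): “demographic parity,” which requires an algorithm to flag members of each group at the same rate. The data below is hypothetical.

```python
def positive_rate(predictions: list[int]) -> float:
    """Share of people an algorithm flags (1 = flagged, 0 = not)."""
    return sum(predictions) / len(predictions)

# Hypothetical ad-targeting decisions for two groups.
men = [1, 0, 1, 1]    # flagged 75% of the time
women = [1, 0, 0, 0]  # flagged 25% of the time

# Demographic parity asks that this gap be (near) zero;
# here it is 0.5, a measurable violation.
parity_gap = abs(positive_rate(men) - positive_rate(women))
print(parity_gap)
```

Turning fairness into a number like `parity_gap` is what makes it possible to audit an algorithm, or to constrain it during design.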
As part of its project “Machine Bias” examining algorithms, ProPublica looked at how the U.S. criminal justice system is increasingly using algorithms to predict a defendant’s risk of future criminality. An article by Julia Angwin reports that the Wisconsin Supreme Court ruled recently that judges could use the computer-generated risk scores to determine whether a defendant received jail or probation, but that the scores could not be “determinative.” The court also said pre-sentencing reports must warn judges about the limits of the algorithm’s accuracy. Wisconsin has been using the risk scores for four years, but has not independently tested them for accuracy or bias.
ProPublica obtained more than 7,000 risk scores assigned to individual defendants by the company that makes the tool used in Wisconsin. After comparing actual recidivism to the company’s predicted recidivism, ProPublica found the scores were wrong 40 percent of the time, and that black defendants were falsely labeled future criminals at almost twice the rate of white defendants. Because the company’s proprietary risk-score formula did not have to be publicly disclosed, ProPublica was not able to examine the data or the calculations used in interpreting it.
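The kind of check ProPublica ran can be sketched with a per-group false-positive rate: among defendants who did not reoffend, what share were labeled high risk? The counts below are invented for illustration; ProPublica's real dataset covered more than 7,000 defendants.

```python
def false_positive_rate(labeled_high_risk: list[int],
                        reoffended: list[int]) -> float:
    """Among people who did NOT reoffend, the share labeled high risk."""
    non_reoffenders = [label for label, actual
                       in zip(labeled_high_risk, reoffended) if not actual]
    return sum(non_reoffenders) / len(non_reoffenders)

# 1 = labeled high risk / actually reoffended, 0 = otherwise (hypothetical).
black_scores, black_outcomes = [1, 1, 1, 0, 0], [1, 0, 0, 0, 0]
white_scores, white_outcomes = [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]

fpr_black = false_positive_rate(black_scores, black_outcomes)  # 2 of 4
fpr_white = false_positive_rate(white_scores, white_outcomes)  # 0 of 4
```

A disparity in this rate means one group bears far more of the cost of the algorithm's mistakes, even if overall accuracy looks similar across groups.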
Angwin writes that the court’s directive to warn judges that risk scores over-predict recidivism among black defendants is a good first step in accountability. “Yet as we rapidly enter the era of automated decision making,” she wrote, “we should demand more than warning labels.”
The credit score is the only algorithm that consumers have a legal right to examine, challenge and demand that erroneous data be deleted or corrected. Those rights are spelled out in the Fair Credit Reporting Act signed by President Richard Nixon in 1970. Advocates for fairness in decision-making software say that today we need the right to examine and challenge data used to make algorithmic decisions about us.