software bias

Uncovering discrimination in machine-learning software

It’s no secret that machine-learning algorithms can be problematic, injecting or amplifying bias in decision-making processes. Software used by courts for sentencing decisions, for example, has been shown to make harsher recommendations for defendants of color.

Sainyam Galhotra, Alexandra Meliou and Yuriy Brun, researchers at the University of Massachusetts Amherst, have developed a new technique for automatically testing software for discrimination.

The reason algorithms deliver biased outcomes, they say, is that the data the machine-learning algorithms rely on is itself full of bias. Many developers and end users simply aren’t aware of that bias, Brun said.

Their technique, called Themis, tests algorithms and measures discrimination in their outcomes. It runs the software many times, varying the inputs to see whether the decisions it makes are biased. Users can find out, for example, whether changing a person’s race affects whether the software recommends bail for a suspect or a lengthy sentence for a criminal.
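
The core idea is simple enough to sketch in a few lines of code. The example below is not Themis itself; it is a minimal illustration, with hypothetical function and field names, of measuring how often flipping a single sensitive attribute, with all other inputs held fixed, changes the software’s decision.

```python
# Illustrative sketch (not Themis's actual implementation): estimate how often
# flipping one sensitive attribute, such as race, changes a decision while all
# other inputs stay the same. Function and field names here are made up.
import random

def causal_discrimination_rate(decide, sample_input, sensitive_attr, values, trials=1000):
    """Fraction of randomly generated inputs whose decision changes when only
    `sensitive_attr` is swapped among `values`."""
    changed = 0
    for _ in range(trials):
        person = sample_input()                      # random applicant, held fixed below
        outcomes = set()
        for v in values:                             # try every value of the sensitive attribute
            variant = dict(person, **{sensitive_attr: v})
            outcomes.add(decide(variant))
        if len(outcomes) > 1:                        # the decision flipped: evidence of bias
            changed += 1
    return changed / trials

# Toy usage with a deliberately biased decision function.
def toy_decide(applicant):
    return applicant["income"] > 50000 and applicant["race"] == "white"

def toy_sample():
    return {"income": random.randint(10000, 100000),
            "race": random.choice(["white", "black"])}

print(causal_discrimination_rate(toy_decide, toy_sample, "race", ["white", "black"]))
```

A rate well above zero suggests the software treats otherwise identical inputs differently based on that attribute alone.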

"Themis can identify bias in software whether that bias is intentional or unintentional and can be applied to software that relies on machine learning, which can inject biases from data without the developers’ knowledge,” Brun said.

The researchers tested Themis on publicly available software from GitHub, including applications meant to be “fairness aware,” meaning they were specifically designed to avoid bias. Even this fairness-aware code can show bias.

“Once you train it on some data that has some biases in it, then the software overall becomes biased,” Brun said.
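
To see why, consider a toy example, not drawn from the researchers’ paper: a model trained on decisions that were themselves biased will reproduce that bias, even when the sensitive attribute is excluded from its features. The data, thresholds and group labels below are invented purely for illustration.

```python
# Illustrative sketch: biased training data produces a biased model,
# even though "race" is never given to the model as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                       # two hypothetical groups, 0 and 1
income = rng.normal(50 + 10 * race, 15, n)         # historical inequity baked into a proxy feature
# Past (biased) decisions favored group 1 beyond what income alone explains.
approved = (income + 10 * race + rng.normal(0, 5, n)) > 60

model = LogisticRegression().fit(income.reshape(-1, 1), approved)  # trained without race
pred = model.predict(income.reshape(-1, 1))
print("approval rate, group 0:", pred[race == 0].mean())
print("approval rate, group 1:", pred[race == 1].mean())
```

The gap between the two approval rates persists because the training labels encode the original bias and a correlated feature carries it into the model’s decisions.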

Themis is free to use and available online. It does not need the source code of the software under test, so a court that uses sentencing software or a police department working with predictive-policing software can run Themis to see whether bias exists.

So far, Themis works only with simple inputs like numbers. A next step is to handle more complex inputs like pictures so researchers can detect bias in facial recognition technology.

Themis is a starting point for removing bias, the researchers said. When biased software is detected, it can be sent back to the developer, who can gain a better “understanding of what the software is doing and where the discrimination bug is.”

Editor's note: This article was changed Sept. 13 to include the name of the first author of the paper on fairness testing, Sainyam Galhotra. We regret the error.

About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.

Leonard can be contacted at mleonard@gcn.com or followed on Twitter @Matt_Lnrd.



