Is that algorithm safe to use?
- By Matt Leonard
- Sep 17, 2018
Government leaders are experimenting with algorithms, artificial intelligence and other mathematical processes for a wide range of mission needs. Using these tools, however, requires an acknowledgement that they’re not perfect.
The “Ethics & Algorithms Toolkit” is meant to help local governments "understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them." The risk management framework was created by the Center for Government Excellence (GovEx) at Johns Hopkins University, the Civic Analytics Network at Harvard University, the city and county of San Francisco and Data Community DC.
Algorithms often make decisions based on historical data, and biases embedded in that data can carry over into the decisions the algorithm makes.
Before using an algorithm, government officials should identify the number of citizens it will affect, how and when the data informing the algorithm is collected, whether the algorithm will make recommendations or decisions, and whether the algorithm's output can be audited.
The toolkit asks a number of questions in this vein and provides various ways to mitigate concerns that arise. For example, it describes characteristics of low-, medium- and high-risk historical data that algorithms use for training and decision-making, and offers specific ways identified issues can be solved.
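The toolkit itself is a narrative worksheet, but the screening step it describes can be sketched as a simple scoring function. The question names, the low/medium/high weights, and the worst-case rollup rule below are illustrative assumptions, not the toolkit's actual method.

```python
# Hypothetical sketch of a risk-screening rollup, loosely modeled on the
# questions above. Not part of the actual Ethics & Algorithms Toolkit.

RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess_algorithm(answers):
    """Roll per-question risk ratings into an overall rating.

    `answers` maps a screening question (e.g. "data_collection") to
    "low", "medium", or "high". The overall rating is the worst single
    answer: one high-risk response flags the whole deployment for
    mitigation before use.
    """
    worst = max(RISK_LEVELS[level] for level in answers.values())
    return next(name for name, score in RISK_LEVELS.items() if score == worst)

# Example: the algorithm only makes recommendations, but it was trained
# on historical data collected without any audit trail.
example = {
    "people_affected": "medium",
    "data_collection": "high",
    "makes_final_decisions": "low",
    "output_auditable": "medium",
}
print(assess_algorithm(example))  # -> high
```

A worst-case rollup is a deliberately conservative choice: averaging the ratings instead would let one serious data-collection problem hide behind several low-risk answers.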
GovEx Director of Data Practices Andrew Nicklin said the evaluation of these kinds of technologies has been difficult for government officials. “Government employees do not have a process or tool to evaluate how risky their algorithms are, nor how to manage those risks,” he said in a statement. “That is, until now.”
Matt Leonard is a former reporter for GCN.