
Holding algorithms accountable

Algorithms are increasingly being used to make decisions in the public and private sectors, even though they have been shown to deliver biased outcomes in some cases. Several methods of governing algorithms have been proposed, but a new report from the Information Technology and Innovation Foundation's Center for Data Innovation argued that those proposals fall short and outlined an approach to "algorithmic accountability" meant to protect against undesirable outcomes.


Prior efforts to combat bias fall into four categories, CDI said: mandating algorithmic transparency or explainability, creating a regulatory body to oversee algorithms, imposing general regulation and leaving algorithms alone.

Each of these proposals has faults, the report authors said. If all artificial intelligence must be explainable, for example, then the technology is held to a higher standard than we apply to human decision-making. Meanwhile, some algorithmic applications do not need regulation at all: dating apps, the report argued, could result in a bad date, but that doesn't mean they should be regulated.

For CDI, algorithmic accountability has three goals: promoting desirable or beneficial outcomes; protecting against undesirable, or harmful, outcomes; and ensuring that laws applying to human decisions can be effectively applied to algorithmic decisions. A governance framework, the authors argued, should therefore employ a variety of controls to ensure operators can verify that an algorithm works in accordance with their intentions and can identify and rectify harmful outcomes.

Transparency is one way algorithms can be made accountable. Even with "black box" algorithms, transparency would allow third parties to determine if the software is functioning as intended.

In general, an algorithm should prioritize accuracy over transparency, CDI said. And even when there is transparency, the decision-making processes of machine-learning applications are often not understood even by their developers. But in some government use cases, such as risk-assessment algorithms, transparency could be beneficial.

“[R]isk-assessment algorithms, such as those used to inform sentencing decisions, may rely on many different variables in their assessments but be static and relatively straightforward, making it easy for their operators to assess the variables involved and determine whether they are appropriate -- as well as observe how a certain data point might impact a risk score because the system is hard-coded to give that variable a particular weighting,” the report said.
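To illustrate the kind of inspectability the report describes, here is a minimal, hypothetical sketch of such a static risk score in Python: each input variable carries a hard-coded weight, so an auditor can see exactly how a given data point moves the score. The variable names and weights below are invented for illustration and are not drawn from any real sentencing or risk-assessment tool.

# Hypothetical hard-coded weights -- illustration only, not from any real tool.
WEIGHTS = {
    "prior_convictions": 2.0,
    "age_at_first_offense": -0.05,
    "failed_court_appearances": 1.5,
}

def risk_score(record: dict) -> float:
    """Weighted sum of the record's variables; missing variables count as zero."""
    return sum(weight * record.get(name, 0) for name, weight in WEIGHTS.items())

# Because the weights are fixed and visible, an operator can see exactly how one
# more prior conviction changes the score (+2.0), or how age shifts it downward.
example = {"prior_convictions": 3, "age_at_first_offense": 19, "failed_court_appearances": 1}
print(risk_score(example))  # 2.0*3 + (-0.05)*19 + 1.5*1 = 6.55

A more complex machine-learning model would not offer this kind of direct read-out, which is the distinction the report draws between straightforward, auditable systems and "black box" ones.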

The report quoted Caleb Watney, a technology policy fellow at the R Street Institute, who argued that because sunshine laws have set a precedent of transparency in the justice system, it could be appropriate to “mandate all algorithms that influence judicial decision-making be open-source.”

The report concluded that it would be reasonable to mandate that public-sector agencies go through an “impact assessment” process for any algorithms they plan to use.

New York has taken steps toward that goal. Acknowledging how algorithms are increasingly incorporated into software that makes decisions about school placements, criminal justice or the distribution of social services, New York Mayor Bill de Blasio recently announced plans to set up a task force to review the city's automated decision systems for equity, fairness and accountability.

The AI Now Institute at New York University recently released a report on assessing the impact of algorithms that recommends a pre-acquisition review and a chance for public comment.

About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.

Leonard can be contacted at mleonard@gcn.com or followed on Twitter @Matt_Lnrd.




Reader Comments

Fri, May 25, 2018 DrK

Excuse me, but isn't this what testing is for? It seems to me that with all the software/systems sophistication comprehensive automated testing should be the norm. Seems like getting ready for release and sale quickly often precludes adequate testing.
