

Assessing the impact of algorithms

What: “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” a report by the AI Now Institute at New York University, which studies the social implications of AI

Why: As public agencies increasingly turn to automated processes and algorithms to make decisions, they need frameworks for accountability that can address inevitable questions – from software bias to the system's impact on the community. The AI Now Institute's Algorithmic Impact Assessment gives public agencies a practical way to assess automated decision systems and to ensure public accountability.

Proposal: Just as an environmental impact statement can increase agencies' sensitivity to environmental values and effectively inform the public of coming changes, an AIA aims to do the same for algorithms before governments put them to use. The process starts with a pre-acquisition review in which the agency, other public officials and the public at large are given a chance to review the proposed technology before the agency enters into any formal agreements. This process would include defining what the agency considers an “automated decision system,” disclosing details about the technology and its use, evaluating the potential for bias and inaccuracy, and planning for third-party researchers to study the system after it becomes operational.

AI Now suggests that public comment be solicited before any AI-enabled system begins operation. In addition, a due process period would allow outside groups or individuals to challenge an agency on its compliance with an AIA. Once an automated decision system is deployed, AI Now says, the communities it will affect should be notified.

AIAs would help public agencies better understand the potential impacts before systems are implemented, encouraging them "to better manage their own technical systems and become leaders in the responsible integration of increasingly complex computational systems in governance." They also provide an opportunity for vendors to foster public trust in their systems.

These AIAs, once implemented, should be renewed on a regular basis, AI Now writes.

Read the full report here.

Editor's note: This article was changed April 18 to correct the name and affiliation of the AI Now Institute at NYU.

About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.

Leonard can be contacted at or followed on Twitter @Matt_Lnrd.

