
Assessing the impact of algorithms

What: “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” a report by the AI Now Institute at New York University, which is focused on the social implications of AI

Why: As public agencies increasingly turn to automated processes and algorithms to make decisions, they need frameworks for accountability that can address inevitable questions – from software bias to the system's impact on the community. The AI Now Institute's Algorithmic Impact Assessment gives public agencies a practical way to assess automated decision systems and to ensure public accountability.

Proposal: Just as an environmental impact statement can increase agencies' sensitivity to environmental values and inform the public of coming changes, an AIA aims to do the same for algorithms before governments put them to use. The process starts with a pre-acquisition review, in which the agency, other public officials and the public at large get a chance to evaluate the proposed technology before the agency enters into any formal agreements. This step would include defining what the agency considers an "automated decision system," disclosing details about the technology and its use, evaluating the potential for bias and inaccuracy, and planning for third-party researchers to study the system after it becomes operational.

Public comment should be solicited before any AI-enabled system begins operation, AI Now suggests. A due process period would then allow outside groups or individuals to challenge an agency on its compliance with an AIA. Once an automated decision system is deployed, AI Now says, the communities it will affect should be notified.

AIAs would help public agencies better understand the potential impacts before systems are implemented, encouraging them "to better manage their own technical systems and become leaders in the responsible integration of increasingly complex computational systems in governance." They also provide an opportunity for vendors to foster public trust in their systems.

These AIAs, once implemented, should be renewed on a regular basis, AI Now writes.

Read the full report here.

Editor's note: This article was changed April 18 to correct the name and affiliation of the AI Now Institute at NYU.

About the Author

Matt Leonard is a former reporter for GCN.

