NIST seeks input on AI risk management framework
- By Shourjya Mookerjee
- Jul 28, 2021
The National Institute of Standards and Technology is seeking comments on developing an Artificial Intelligence Risk Management Framework (AI RMF) that would improve organizations’ ability to incorporate trustworthiness into the design, development and use of AI systems.
"The Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses," NIST wrote in a July 28 request for information published in the Federal Register.
NIST wants input on how the framework should address challenges in AI risk management, including identification, assessment, prioritization, response and communication of AI risks; how organizations currently assess and manage AI risk, including bias and harmful outcomes; and how AI can be developed so that it lessens the potential negative impact on individuals and society, the RFI said.
NIST also asked for suggestions on common definitions and characterizations of the aspects of trustworthiness, as well as best practices that might align with an AI risk framework.
NIST plans to develop the AI RMF using the same open, collaborative process it used for the widely embraced 2014 Cybersecurity Framework and the 2020 Privacy Framework.
Responses are due Aug. 19. The full RFI is available in the Federal Register.
Shourjya Mookerjee is an associate editor for GCN and FCW. He is a graduate of the University of Maryland, College Park, and has written for Vox Media, Fandom and a number of capital-area news outlets. He can be reached at [email protected] – or you can find him ranting about sports, cinematography and the importance of local journalism on Twitter @byShourjya.