
NIST Artificial Intelligence Risk Management Framework

Last updated: 7 Jan 2025

Development Stage

Pre-draft
Draft: 29 Sep 2022
Published: 26 Jan 2023

Abstract

In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.

NIST has also published a companion AI RMF Playbook, along with an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives. In addition, NIST has made available a video explainer about the AI RMF.

To view public comments received on previous drafts of the AI RMF and the Requests for Information, see the AI RMF Development page.


Categorisation

Domain: Horizontal

