NIST.AI.600-1

NIST Trustworthy and Responsible AI Overview

  • Title: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1)

  • Availability: Freely available from NIST as publication NIST AI 600-1

  • Release Date: July 2024

  • Published by: National Institute of Standards and Technology (NIST), U.S. Department of Commerce

About AI at NIST

  • Purpose of NIST:

    • Develops measurements, technology, tools, and standards for reliable, safe, and fair AI.

    • Aims for transparent AI that maximizes commercial and societal benefits while minimizing harm.

    • Tasked with specific actions under the 2023 Executive Order on Safe, Secure, and Trustworthy AI.

  • Initiatives:

    • Established the U.S. AI Safety Institute and the AI Safety Institute Consortium to advance safe AI adoption.

    • Community contributions were essential in developing this profile; NIST staff and guest researchers are acknowledged.

Document Structure

  • Table of Contents:

      1. Introduction

      2. Overview of Risks Unique to or Exacerbated by GAI

      3. Suggested Actions to Manage GAI Risks

    • Appendix A. Primary GAI Considerations

    • Appendix B. References

Introduction to GAI and Risk Management

  • Context:

    • GAI encompasses AI models and systems that generate synthetic content based on the data they were trained on.

  • Framework Implementation:

    • Responds to Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

    • Provides cross-sectoral guidelines for managing GAI risks based on organizational goals and resources.

  • GAI Defined:

    • Models that emulate the structure of their training data to produce synthetic content; the associated risks vary by lifecycle stage, deployment ecosystem, and risk source.

Risks Unique to GAI

  • Risk Dimensions:

    • Stage of AI lifecycle: Design, development, deployment, operation, decommissioning.

    • Scope: Risks can be model-specific, application-centered, or relate to broader ecosystem impacts.

  • Unique Risks Include:

    • Misuse of generated content, data privacy breaches, harmful bias, environmental impacts, and information integrity challenges (a risk-register sketch follows this list).
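To make the two risk dimensions above concrete, the following minimal Python sketch shows one way a risk-register entry could be keyed by lifecycle stage and scope. The class names, fields, and example mitigations are illustrative assumptions for this summary, not structures defined in the profile.

```python
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStage(Enum):
    # Lifecycle stages named in the profile summary above.
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    DECOMMISSIONING = "decommissioning"


class RiskScope(Enum):
    # Whether a risk attaches to a model, an application, or the broader ecosystem.
    MODEL = "model"
    APPLICATION = "application"
    ECOSYSTEM = "ecosystem"


@dataclass
class GAIRiskEntry:
    """One row of a hypothetical GAI risk register."""
    risk: str                      # e.g., "data privacy breach"
    stage: LifecycleStage
    scope: RiskScope
    mitigations: list[str] = field(default_factory=list)


# Example entries echoing risks named in this summary.
register = [
    GAIRiskEntry("data privacy breach", LifecycleStage.DEPLOYMENT, RiskScope.APPLICATION,
                 ["minimize retained prompts", "restrict access to logs"]),
    GAIRiskEntry("harmful bias in outputs", LifecycleStage.DEVELOPMENT, RiskScope.MODEL,
                 ["curate training data", "run bias evaluations before release"]),
]

for entry in register:
    print(f"{entry.risk}: stage={entry.stage.value}, scope={entry.scope.value}")
```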

Suggested Actions to Manage GAI Risks

  • Framework for Action:

    • GAI risk-management actions are organized under the applicable AI RMF functions: Govern, Map, Measure, and Manage.

  • Governance Actions:

    • Align GAI development with applicable laws, enhance transparency about data usage, evaluate model capabilities and associated risks, and establish thresholds for performance requirements.

    • Incorporate public feedback, establish channels for incident reporting, and maintain continuous monitoring practices (an illustrative grouping of these actions under the AI RMF functions follows this list).
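As a rough illustration of how suggested actions map onto the four AI RMF functions, the Python sketch below groups the example actions from this summary. The function names come from the AI RMF itself; the groupings and wording are paraphrases for illustration, not the profile's official action identifiers.

```python
# Illustrative grouping of GAI risk-management actions under the AI RMF
# functions (Govern, Map, Measure, Manage). Wording paraphrases this summary.
actions_by_function = {
    "Govern": [
        "Align GAI development with applicable laws and policies",
        "Establish thresholds for performance requirements",
    ],
    "Map": [
        "Document data usage and provenance to improve transparency",
    ],
    "Measure": [
        "Evaluate model capabilities and associated risks before deployment",
    ],
    "Manage": [
        "Maintain incident-reporting channels and continuous monitoring",
        "Incorporate public and stakeholder feedback",
    ],
}

for function, actions in actions_by_function.items():
    print(function)
    for action in actions:
        print(f"  - {action}")
```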

Primary Considerations in GAI Guidance

  • Governance:

    • Emphasizes collaborative decision-making to handle risks in GAI frameworks.

  • Pre-Deployment Testing:

    • Stress-test systems across a wide range of inputs and operating conditions to surface risky outputs before public deployment (a minimal test-harness sketch follows).
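A minimal sketch of what such pre-deployment stress testing might look like in practice is shown below. The `generate` and `violates_policy` callables are placeholders for a real model call and a real output checker, and the probe prompts are illustrative, not an official test suite.

```python
from typing import Callable


def stress_test(generate: Callable[[str], str],
                violates_policy: Callable[[str], bool],
                prompts: list[str]) -> list[tuple[str, str]]:
    """Run a batch of probing prompts and collect any policy-violating outputs."""
    failures = []
    for prompt in prompts:
        output = generate(prompt)
        if violates_policy(output):
            failures.append((prompt, output))
    return failures


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    echo_model = lambda p: f"model response to: {p}"
    flag_nothing = lambda out: False
    probes = [
        "Request for private personal data",
        "Prompt seeking instructions for harmful activity",
    ]
    failing = stress_test(echo_model, flag_nothing, probes)
    print(f"{len(failing)} failing prompts out of {len(probes)}")
```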

Conclusion

  • Importance of Robust AI Governance:

    • Encourages responsible practices in GAI development to enhance safety and public trust.

    • Future revisions may expand on current findings and risks based on evolving insights.