
NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) is the US government’s strategic framework for managing AI risks. While not a law, it is increasingly required of vendors selling to the US government and recommended for federal contractors. It defines four functions (Govern, Map, Measure, Manage) that help organizations establish AI risk governance. TruthVouch automates mapping AI systems to the NIST functions and tracks implementation.

What Is NIST AI RMF?

NIST AI RMF 1.0 (published January 2023) is a voluntary framework that helps organizations manage risks from AI systems. It is strategic rather than prescriptive: it defines what risks to consider, not how to implement controls.

4 Functions:

| Function | Purpose | When |
|----------|---------|------|
| Govern | Establish AI risk governance — roles, policies, integration with business | Before deploying AI |
| Map | Map AI system characteristics, capabilities, risks to NIST framework | During AI development |
| Measure | Measure AI system performance, safety, fairness, security, privacy | Continuous through lifecycle |
| Manage | Manage identified risks through mitigation, monitoring, response | Throughout AI lifecycle |

NIST Functions Explained

Function 1: Govern

Establish organizational AI governance — leadership commitment, policies, roles, accountability.

What to do:

  1. Define who’s responsible for AI governance (Chief AI Officer, AI governance board, etc.)
  2. Create AI governance policy covering:
    • AI system lifecycle management
    • Risk management integration
    • Roles and responsibilities
    • Resource allocation
    • Vendor management
  3. Document AI principles (transparency, fairness, accountability, etc.)
  4. Integrate AI governance with business strategy
  5. Plan training and awareness for staff

Compliance AI support:

  • Governance policy templates
  • Governance assessment questionnaire
  • Role and responsibility matrix
  • Training program builder

Typical governance artifacts:

  • AI governance policy (1-2 pages)
  • AI governance board charter
  • Role descriptions (Chief AI Officer, Model Owner, Data Steward, etc.)
  • Annual AI governance plan
  • Governance effectiveness report

Function 2: Map

Map each AI system’s characteristics, capabilities, and risks to NIST categories.

What to map:

  1. System characteristics:

    • What does it do? (generative AI, recommender, classifier, etc.)
    • What data does it use? (customer, operational, proprietary)
    • Who does it affect? (employees, customers, public)
  2. Capabilities:

    • Performance metrics (accuracy, latency, throughput)
    • Explainability (can users understand decisions?)
    • Adaptability (can it be updated/retrained?)
  3. Risks: Map to NIST risk categories:

    • Fairness & Bias — Discriminatory outcomes for protected groups
    • Transparency & Explainability — Can affected individuals understand decisions?
    • Robustness & Security — Resilience to adversarial attacks or data poisoning?
    • Privacy & Data Protection — Training data or model membership inference attacks?
    • Accountability — Clear accountability for AI decisions?
    • Societal & Environmental — Broader impacts (labor displacement, environmental)?
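
The mapping above can be captured as a simple data structure. The class, field names, and rating scale below are illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass, field

# Hypothetical Map-function profile; names and rating scale are assumptions.
RISK_CATEGORIES = [
    "fairness_bias",
    "transparency_explainability",
    "robustness_security",
    "privacy_data_protection",
    "accountability",
    "societal_environmental",
]

@dataclass
class AISystemProfile:
    name: str
    system_type: str               # e.g. "classifier", "generative"
    data_sources: list             # e.g. ["customer", "operational"]
    affected_parties: list         # e.g. ["employees", "customers", "public"]
    risks: dict = field(default_factory=dict)  # category -> "low"/"medium"/"high"

    def unmapped_categories(self) -> list:
        """Risk categories with no documented rating yet: a gap checklist."""
        return [c for c in RISK_CATEGORIES if c not in self.risks]

profile = AISystemProfile(
    name="loan-approval-model",
    system_type="classifier",
    data_sources=["customer"],
    affected_parties=["customers"],
    risks={"fairness_bias": "high", "privacy_data_protection": "medium"},
)
print(profile.unmapped_categories())
```

A gap check like this is one way to verify that every system profile addresses all six risk categories before sign-off.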

Compliance AI support:

  • AI system profile form (system type, data, users, risks)
  • Auto-classification of system into risk categories (similar to EU AI Act)
  • Risk mapping questionnaire
  • Artifacts export

Typical mapping artifacts:

  • AI system profile (1 page per system)
  • Risk matrix (risks vs. systems)
  • Data processing diagram (where data comes from, how it flows)
  • RACI matrix (who’s responsible for risk management)

Function 3: Measure

Measure AI system performance, safety, fairness, security, and privacy.

What to measure:

| Dimension | Metrics | Tools |
|-----------|---------|-------|
| Performance | Accuracy, precision, recall, F1 score, ROC-AUC, latency, throughput | Model testing, performance dashboards |
| Fairness | Disparate impact, demographic parity, equalized odds, predictive parity | Bias testing frameworks (AI Fairness 360, Fairlearn) |
| Robustness | Adversarial testing, data poisoning resistance, out-of-distribution detection | Adversarial attack frameworks (Foolbox, Adversarial Robustness Toolbox) |
| Security | Access control audit, data encryption, model extraction risk | Security scanning, penetration testing |
| Privacy | Membership inference risk, model inversion risk, differential privacy | Privacy auditing tools |
| Explainability | Feature importance (SHAP, LIME), counterfactuals, rule-based explanations | Interpretability libraries |
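
Two of the fairness metrics in the table can be computed directly from group selection rates. This is a minimal sketch with made-up predictions; a real audit would use Fairlearn or AI Fairness 360:

```python
# Minimal fairness-metric sketch; the prediction data is illustrative.
# 1 = positive outcome (e.g. loan approved), 0 = negative outcome.

def selection_rate(preds):
    """Fraction of a group receiving the positive outcome."""
    return sum(preds) / len(preds)

def disparate_impact(preds_group_a, preds_group_b):
    """Ratio of selection rates; values below ~0.8 flag potential bias
    under the common 'four-fifths' rule of thumb."""
    return selection_rate(preds_group_a) / selection_rate(preds_group_b)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in selection rates; 0 means parity."""
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

group_a = [1, 0, 1, 0, 1, 0, 0, 0]  # selection rate 0.375
group_b = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75

print(disparate_impact(group_a, group_b))        # 0.5 (below the 0.8 threshold)
print(demographic_parity_gap(group_a, group_b))  # 0.375
```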

Compliance AI support:

  • Performance dashboard (accuracy, latency, throughput)
  • Fairness testing (bias gaps across demographics)
  • Robustness testing (adversarial examples)
  • Privacy assessment
  • Explainability audit

Typical measurement artifacts:

  • Performance report (baseline metrics)
  • Fairness assessment (demographic parity, disparate impact)
  • Robustness report (adversarial testing results)
  • Privacy risk assessment
  • Explainability methodology document

Function 4: Manage

Manage identified risks through mitigation, monitoring, and incident response.

What to do:

  1. Risk response: For each risk identified in the Map function, choose to:

    • Mitigate (reduce risk through controls)
    • Monitor (observe risk continuously)
    • Accept (document decision to accept risk)
  2. Continuous monitoring:

    • Track performance metrics over time
    • Alert if performance degrades
    • Monitor fairness metrics for drift
    • Check access logs for suspicious activity
  3. Incident response:

    • If risk materializes (model fails, bias detected, security breach)
    • Investigate root cause
    • Take corrective action
    • Update risk register
    • Communicate to stakeholders
  4. Governance integration:

    • Regular reporting to governance board
    • Update risk register quarterly
    • Adjust mitigation strategies as needed
    • Plan model retraining/updates
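
The continuous-monitoring step above can be sketched as a baseline-vs-threshold check. The metric names, baselines, and thresholds here are illustrative assumptions:

```python
# Hypothetical drift check: compare current metrics against baselines
# and flag anything that moved more than its configured threshold.
baselines = {"accuracy": 0.91, "demographic_parity_gap": 0.03}
thresholds = {"accuracy": 0.05, "demographic_parity_gap": 0.04}  # max allowed drift

def check_drift(current):
    """Return (metric, drift) pairs that exceed their threshold."""
    alerts = []
    for metric, baseline in baselines.items():
        drift = abs(current[metric] - baseline)
        if drift > thresholds[metric]:
            alerts.append((metric, round(drift, 3)))
    return alerts

print(check_drift({"accuracy": 0.84, "demographic_parity_gap": 0.05}))
# → [('accuracy', 0.07)]
```

In practice the alert list would feed the incident-response workflow rather than a print statement.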

Compliance AI support:

  • Risk mitigation playbooks
  • Continuous monitoring dashboards
  • Incident management workflow
  • Risk register tracking
  • Governance reporting

Typical management artifacts:

  • Risk register (risks, likelihood, impact, mitigation)
  • Monitoring configuration (what metrics, thresholds, alerting)
  • Incident response procedures
  • Quarterly risk report
  • Board-level governance report
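
A risk register like the one listed above is often prioritized by a likelihood × impact score. The entries, 1-5 scales, and field names below are illustrative assumptions:

```python
# Hypothetical risk register; likelihood and impact on a 1-5 scale.
register = [
    {"risk": "bias in loan approvals", "likelihood": 3, "impact": 5,
     "mitigation": "quarterly fairness audit"},
    {"risk": "model extraction via API", "likelihood": 2, "impact": 3,
     "mitigation": "rate limiting"},
    {"risk": "training data drift", "likelihood": 4, "impact": 3,
     "mitigation": "monthly retraining"},
]

# Sort by score so the highest-priority risks surface first.
for entry in sorted(register, key=lambda e: e["likelihood"] * e["impact"],
                    reverse=True):
    print(entry["risk"], entry["likelihood"] * entry["impact"])
```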

NIST AI RMF Maturity Levels

NIST defines maturity levels for each function:

| Level | Govern | Map | Measure | Manage |
|-------|--------|-----|---------|--------|
| Initial | Ad hoc governance | Systems not formally mapped | Minimal measurement | Reactive incident response |
| Repeatable | Documented policies | Systems mapped to categories | Baseline metrics collected | Structured mitigation process |
| Defined | Integrated governance | Comprehensive risk assessments | Continuous monitoring | Proactive risk management |
| Optimized | Strategic AI governance | Dynamic risk assessment | Advanced analytics (ML-based drift detection) | Predictive risk management |

Your goal: Target “Defined” maturity for compliance. “Optimized” requires advanced tooling.
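
One quick way to operationalize the maturity table is to score each function against the "Defined" target; the current-state values below are hypothetical:

```python
# Hypothetical maturity self-assessment against the table above.
LEVELS = ["Initial", "Repeatable", "Defined", "Optimized"]
TARGET = "Defined"

current = {"Govern": "Repeatable", "Map": "Defined",
           "Measure": "Initial", "Manage": "Repeatable"}

# Functions below target, with how many levels each needs to climb.
gaps = {fn: LEVELS.index(TARGET) - LEVELS.index(lvl)
        for fn, lvl in current.items()
        if LEVELS.index(lvl) < LEVELS.index(TARGET)}
print(gaps)  # → {'Govern': 1, 'Measure': 2, 'Manage': 1}
```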

NIST AI RMF vs. Other Frameworks

NIST AI RMF vs. ISO 42001

| Aspect | NIST AI RMF | ISO 42001 |
|--------|-------------|-----------|
| Scope | Strategic AI risk framework | Operational AI management system |
| Approach | Functions and outcomes (what to achieve) | Controls and processes (how to implement) |
| Certification | Self-assessment, no certification | Third-party certification available |
| Adoption | US government, enterprises | Growing globally |

Best practice: Use NIST AI RMF for strategy, ISO 42001 for operations. Many organizations map NIST functions to ISO 42001 controls.

NIST AI RMF vs. EU AI Act

| Aspect | NIST AI RMF | EU AI Act |
|--------|-------------|-----------|
| Type | Strategic framework | Legal regulation |
| Requirements | Guidance | Mandatory if operating in the EU |
| Timeline | Voluntary, no deadline | Key obligations effective Aug 2026 |
| Scope | All AI risks | High-risk AI systems |

If you serve EU users, you must comply with the EU AI Act; applying NIST AI RMF in parallel is best practice.

Compliance Roadmap

Step 1: Assess Current State (1-2 weeks)

  1. Go to Compliance > Frameworks > NIST AI RMF > Assessment

  2. Answer questions about:

    • Current AI governance maturity
    • Which AI systems you have
    • Current risk management practices
    • Measurement capabilities
    • Incident response procedures
  3. Compliance AI generates a maturity report:

    • Current maturity level per function
    • Gaps vs. target maturity (Defined)
    • Recommended improvements

Step 2: Build Governance (1 month)

  1. Create AI governance policy
  2. Establish AI governance board
  3. Assign roles and responsibilities
  4. Plan AI risk management process

Compliance AI support:

  • Governance policy template
  • Board charter template
  • RACI matrix builder

Step 3: Map AI Systems (2-4 weeks)

  1. For each AI system, document:

    • System characteristics (type, data, scope)
    • Capabilities (performance, explainability)
    • Risks (fairness, robustness, privacy, security)
  2. Create risk register

Compliance AI support:

  • System profile questionnaire
  • Risk matrix visualization
  • Risk register tracking

Step 4: Establish Measurement (4-8 weeks)

  1. Define baseline metrics for each AI system
  2. Set up dashboards
  3. Implement continuous monitoring
  4. Establish alerting thresholds

Compliance AI support:

  • Performance dashboard
  • Fairness testing
  • Robustness assessment
  • Privacy audit

Step 5: Implement Management (ongoing)

  1. Create risk mitigation plans
  2. Set up incident response procedures
  3. Create governance reporting

Compliance AI support:

  • Risk management playbooks
  • Incident management workflow
  • Quarterly governance reports

Typical Implementation Timeline

| Function | Duration | Effort |
|----------|----------|--------|
| Govern | 1-2 months | 40-80 hours |
| Map | 2-4 weeks per system | 10-20 hours per system |
| Measure | 4-8 weeks | 60-120 hours |
| Manage | Ongoing | 10-20 hours/month |
| Total (initial) | 2-4 months | 150-300 hours |

Next Steps