NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF) is the US government’s strategic framework for managing AI risks. While not a law, it is increasingly expected of vendors selling to the US government and recommended for federal contractors. It provides four functions (Govern, Map, Measure, Manage) that help organizations establish AI risk governance. TruthVouch automates mapping AI systems to the NIST functions and tracks implementation.
What Is NIST AI RMF?
NIST AI RMF 1.0 (published January 2023) is a voluntary framework that helps organizations manage risks from AI systems. It’s strategic rather than prescriptive — it defines what risks to consider, not how to implement controls.
The four functions:
| Function | Purpose | When |
|---|---|---|
| Govern | Establish AI risk governance — roles, policies, integration with business | Before deploying AI |
| Map | Map AI system characteristics, capabilities, risks to NIST framework | During AI development |
| Measure | Measure AI system performance, safety, fairness, security, privacy | Continuous through lifecycle |
| Manage | Manage identified risks through mitigation, monitoring, response | Throughout AI lifecycle |
NIST Functions Explained
Function 1: Govern
Establish organizational AI governance — leadership commitment, policies, roles, accountability.
What to do:
- Define who’s responsible for AI governance (Chief AI Officer, AI governance board, etc.)
- Create AI governance policy covering:
- AI system lifecycle management
- Risk management integration
- Roles and responsibilities
- Resource allocation
- Vendor management
- Document AI principles (transparency, fairness, accountability, etc.)
- Integrate AI governance with business strategy
- Plan training and awareness for staff
Compliance AI support:
- Governance policy templates
- Governance assessment questionnaire
- Role and responsibility matrix
- Training program builder
Typical governance artifacts:
- AI governance policy (1-2 pages)
- AI governance board charter
- Role descriptions (Chief AI Officer, Model Owner, Data Steward, etc.)
- Annual AI governance plan
- Governance effectiveness report
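The role and responsibility matrix among the artifacts above can be kept as structured data so accountability is checkable. A minimal RACI sketch in Python; the roles and activities are illustrative examples, not a prescribed structure or a TruthVouch API:

```python
# RACI matrix: one Accountable (A) and one Responsible (R) party per activity,
# plus lists of Consulted (C) and Informed (I) parties. All names are examples.
raci = {
    "approve AI governance policy": {
        "A": "CEO", "R": "Chief AI Officer",
        "C": ["Legal"], "I": ["All staff"],
    },
    "model risk assessment": {
        "A": "Chief AI Officer", "R": "Model Owner",
        "C": ["Data Steward"], "I": ["AI governance board"],
    },
}

def accountable_for(activity):
    """Return the single Accountable party for an activity."""
    return raci[activity]["A"]

print(accountable_for("model risk assessment"))  # Chief AI Officer
```

Keeping the matrix as data (rather than a document) makes it easy to verify that every activity has exactly one Accountable party.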
Function 2: Map
Map each AI system’s characteristics, capabilities, and risks to NIST categories.
What to map:
- System characteristics:
  - What does it do? (generative AI, recommender, classifier, etc.)
  - What data does it use? (customer, operational, proprietary)
  - Who does it affect? (employees, customers, public)
- Capabilities:
  - Performance metrics (accuracy, latency, throughput)
  - Explainability (can users understand decisions?)
  - Adaptability (can it be updated/retrained?)
- Risks, mapped to NIST risk categories:
  - Fairness & Bias — discriminatory outcomes for protected groups
  - Transparency & Explainability — can affected individuals understand decisions?
  - Robustness & Security — resilience to adversarial attacks or data poisoning
  - Privacy & Data Protection — exposure of training data or model membership inference attacks
  - Accountability — clear accountability for AI decisions
  - Societal & Environmental — broader impacts (labor displacement, environmental)
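The mapping exercise above can be captured as a simple data structure: one profile per system, with risks rated per NIST category and a gap list of categories not yet assessed. A minimal sketch in Python; the class and field names are illustrative, not a TruthVouch API:

```python
from dataclasses import dataclass, field

# The six NIST AI RMF risk categories listed above
RISK_CATEGORIES = [
    "fairness_bias",
    "transparency_explainability",
    "robustness_security",
    "privacy_data_protection",
    "accountability",
    "societal_environmental",
]

@dataclass
class AISystemProfile:
    """One-page profile for a single AI system (Map function)."""
    name: str
    system_type: str                           # e.g. "classifier", "recommender"
    data_sources: list = field(default_factory=list)
    affected_parties: list = field(default_factory=list)
    risks: dict = field(default_factory=dict)  # category -> "low"/"medium"/"high"

    def unmapped_categories(self):
        """Categories not yet assessed: the gap list for this system."""
        return [c for c in RISK_CATEGORIES if c not in self.risks]

profile = AISystemProfile(
    name="loan-approval-model",
    system_type="classifier",
    data_sources=["customer"],
    affected_parties=["customers"],
    risks={"fairness_bias": "high", "privacy_data_protection": "medium"},
)
print(profile.unmapped_categories())
```

The gap list is what drives the rest of the Map work: each unmapped category needs an assessment before the system profile is complete.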
Compliance AI support:
- AI system profile form (system type, data, users, risks)
- Auto-classification of each system into risk categories (similar to the EU AI Act’s risk tiers)
- Risk mapping questionnaire
- Artifacts export
Typical mapping artifacts:
- AI system profile (1 page per system)
- Risk matrix (risks vs. systems)
- Data processing diagram (where data comes from, how it flows)
- RACI matrix (who’s responsible for risk management)
Function 3: Measure
Measure AI system performance, safety, fairness, security, and privacy.
What to measure:
| Dimension | Metrics | Tools |
|---|---|---|
| Performance | Accuracy, Precision, Recall, F1 Score, ROC-AUC, Latency, Throughput | Model testing, performance dashboards |
| Fairness | Disparate impact, Demographic parity, Equalized odds, Predictive parity | Bias testing frameworks (AI Fairness 360, Fairlearn) |
| Robustness | Adversarial testing, Data poisoning resistance, Out-of-distribution detection | Adversarial attack frameworks (Foolbox, Adversarial Robustness Toolbox) |
| Security | Access control audit, Data encryption, Model extraction risk | Security scanning, penetration testing |
| Privacy | Membership inference risk, Model inversion risk, Differential privacy | Privacy auditing tools |
| Explainability | Feature importance (SHAP, LIME), Counterfactuals, Rule-based explanations | Interpretability libraries |
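Two of the fairness metrics in the table, disparate impact and demographic parity, reduce to simple arithmetic over group selection rates. A minimal sketch in plain Python with toy data; a real assessment would use a framework such as Fairlearn or AI Fairness 360:

```python
def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. loan approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected, reference):
    """Ratio of selection rates; below 0.8 fails the common four-fifths rule."""
    return selection_rate(protected) / selection_rate(reference)

def demographic_parity_gap(protected, reference):
    """Absolute difference in selection rates between the two groups."""
    return abs(selection_rate(protected) - selection_rate(reference))

# Toy outcomes: 1 = approved, 0 = denied
group_a = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]  # protected group: 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # reference group: 60% approved

print(disparate_impact(group_a, group_b))       # 0.5 -> fails four-fifths rule
print(demographic_parity_gap(group_a, group_b)) # 0.3 gap in approval rates
```

These point estimates are only the starting measurement; production fairness testing also needs confidence intervals and slicing across multiple demographic attributes.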
Compliance AI support:
- Performance dashboard (accuracy, latency, throughput)
- Fairness testing (bias gaps across demographics)
- Robustness testing (adversarial examples)
- Privacy assessment
- Explainability audit
Typical measurement artifacts:
- Performance report (baseline metrics)
- Fairness assessment (demographic parity, disparate impact)
- Robustness report (adversarial testing results)
- Privacy risk assessment
- Explainability methodology document
Function 4: Manage
Manage identified risks through mitigation, monitoring, incident response.
What to do:
- Mitigation: for each risk identified in the Map function, choose one of:
  - Mitigate (reduce risk through controls)
  - Monitor (observe risk continuously)
  - Accept (document decision to accept risk)
- Continuous monitoring:
  - Track performance metrics over time
  - Alert if performance degrades
  - Monitor fairness metrics for drift
  - Check access logs for suspicious activity
- Incident response: if a risk materializes (model failure, bias detected, security breach):
  - Investigate root cause
  - Take corrective action
  - Update the risk register
  - Communicate to stakeholders
- Governance integration:
  - Regular reporting to the governance board
  - Update the risk register quarterly
  - Adjust mitigation strategies as needed
  - Plan model retraining/updates
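The continuous-monitoring step above (track metrics over time, alert when performance degrades) can be sketched as a threshold check against a recorded baseline. The metric names and thresholds here are illustrative, not product defaults:

```python
def check_degradation(baseline, current, max_drop):
    """Return alert messages for metrics that fell more than max_drop below baseline."""
    alerts = []
    for metric, allowed in max_drop.items():
        drop = baseline[metric] - current[metric]
        if drop > allowed:
            alerts.append(f"ALERT: {metric} fell by {drop:.2f} (allowed {allowed})")
    return alerts

# Baseline metrics recorded at deployment; current metrics from the latest run
baseline = {"accuracy": 0.91, "recall": 0.88}
current = {"accuracy": 0.84, "recall": 0.87}
max_drop = {"accuracy": 0.05, "recall": 0.05}

alerts = check_degradation(baseline, current, max_drop)
print(alerts)  # accuracy fell 0.07 -> one alert fires
```

In practice the same pattern runs on a schedule, with alerts routed to the incident-response workflow described above.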
Compliance AI support:
- Risk mitigation playbooks
- Continuous monitoring dashboards
- Incident management workflow
- Risk register tracking
- Governance reporting
Typical management artifacts:
- Risk register (risks, likelihood, impact, mitigation)
- Monitoring configuration (what metrics, thresholds, alerting)
- Incident response procedures
- Quarterly risk report
- Board-level governance report
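The risk register artifact above (risk, likelihood, impact, mitigation) is often scored with a simple likelihood-times-impact model so the quarterly report can rank risks. An illustrative sketch, not the product’s scoring scheme; entries are examples:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, impact):
    """Likelihood x impact on a 1-9 scale."""
    return LEVELS[likelihood] * LEVELS[impact]

# Example register entries; risks and mitigations are illustrative
register = [
    {"risk": "model extraction via API", "likelihood": "low",
     "impact": "medium", "mitigation": "rate limiting"},
    {"risk": "biased loan decisions", "likelihood": "medium",
     "impact": "high", "mitigation": "fairness testing each release"},
]

# Sort so the highest-scoring risks surface first in the quarterly report
register.sort(key=lambda r: risk_score(r["likelihood"], r["impact"]),
              reverse=True)
print(register[0]["risk"])  # biased loan decisions (score 6)
```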
NIST AI RMF Maturity Levels
The AI RMF itself does not prescribe a formal maturity model; the levels below are a common way to assess each function:
| Level | Govern | Map | Measure | Manage |
|---|---|---|---|---|
| Initial | Ad hoc governance | Systems not formally mapped | Minimal measurement | Reactive incident response |
| Repeatable | Documented policies | Systems mapped to categories | Baseline metrics collected | Structured mitigation process |
| Defined | Integrated governance | Comprehensive risk assessments | Continuous monitoring | Proactive risk management |
| Optimized | Strategic AI governance | Dynamic risk assessment | Advanced analytics (ML-based drift detection) | Predictive risk management |
Your goal: Target “Defined” maturity for compliance. “Optimized” requires advanced tooling.
NIST AI RMF vs. Other Frameworks
NIST AI RMF vs. ISO 42001
| Aspect | NIST AI RMF | ISO 42001 |
|---|---|---|
| Scope | Strategic AI risk framework | Operational AI management system |
| Approach | Functions and outcomes (what to achieve) | Controls and processes (how to implement) |
| Certification | Self-assessment, no certification | Third-party certification available |
| Adoption | US government, enterprises | Growing globally |
Best practice: Use NIST AI RMF for strategy, ISO 42001 for operations. Many organizations map NIST functions to ISO 42001 controls.
NIST AI RMF vs. EU AI Act
| Aspect | NIST AI RMF | EU AI Act |
|---|---|---|
| Type | Strategic framework | Legal regulation |
| Requirements | Guidance | Mandatory for AI systems placed on the EU market |
| Timeline | No deadline (voluntary) | In force since Aug 2024; most high-risk obligations apply from Aug 2026 |
| Scope | All AI risks | Primarily high-risk AI systems |
If you have EU users, you must comply with the EU AI Act; apply NIST AI RMF in parallel as best practice.
Compliance Roadmap
Step 1: Assess Current State (1-2 weeks)
- Go to Compliance > Frameworks > NIST AI RMF > Assessment
- Answer questions about:
  - Current AI governance maturity
  - Which AI systems you have
  - Current risk management practices
  - Measurement capabilities
  - Incident response procedures
- Compliance AI generates a maturity report:
  - Current maturity level per function
  - Gaps vs. target maturity (Defined)
  - Recommended improvements
Step 2: Build Governance (1 month)
- Create AI governance policy
- Establish AI governance board
- Assign roles and responsibilities
- Plan AI risk management process
Compliance AI support:
- Governance policy template
- Board charter template
- RACI matrix builder
Step 3: Map AI Systems (2-4 weeks)
- For each AI system, document:
  - System characteristics (type, data, scope)
  - Capabilities (performance, explainability)
  - Risks (fairness, robustness, privacy, security)
- Create a risk register
Compliance AI support:
- System profile questionnaire
- Risk matrix visualization
- Risk register tracking
Step 4: Establish Measurement (4-8 weeks)
- Define baseline metrics for each AI system
- Set up dashboards
- Implement continuous monitoring
- Establish alerting thresholds
Compliance AI support:
- Performance dashboard
- Fairness testing
- Robustness assessment
- Privacy audit
Step 5: Implement Management (ongoing)
- Create risk mitigation plans
- Set up incident response procedures
- Create governance reporting
Compliance AI support:
- Risk management playbooks
- Incident management workflow
- Quarterly governance reports
Typical Implementation Timeline
| Function | Duration | Effort |
|---|---|---|
| Govern | 1-2 months | 40-80 hours |
| Map | 2-4 weeks per system | 10-20 hours per system |
| Measure | 4-8 weeks | 60-120 hours |
| Manage | Ongoing | 10-20 hours/month |
| Total (initial) | 2-4 months | 150-300 hours |
Next Steps
- Start NIST AI RMF assessment: Go to Compliance > Frameworks > NIST AI RMF > Assessment
- Create AI governance policy: Policy & Control Management
- Map your AI systems: AI System Registry
- Set up monitoring: Evidence Connectors