
Correction History & Audit

Correction history provides a complete audit trail of every correction: what was corrected, who approved it, when it deployed, and whether it worked. Use it for compliance reporting, debugging, and continuous improvement.

Accessing Correction History

Path 1: From Alert

  1. Open the alert
  2. Scroll to the “Correction History” section
  3. Review all corrections applied to that fact

Path 2: From Dashboard

  1. Click Shield → Corrections → History
  2. View all corrections across organization
  3. Filter by date, fact, status, team member

Correction Record

Each correction includes:

Metadata

  • Correction ID: unique identifier (cor_abc123)
  • Fact corrected: which Truth Nugget was this about?
  • Hallucination details: what AI said vs. what it should have said
  • Severity: Critical/High/Medium/Low
  • Confidence: how certain was Shield about the hallucination? (0-100%)

Timeline

  • Detected: When hallucination detected (timestamp)
  • Approved: When fact owner approved correction (timestamp + approver name)
  • Deployed: When correction deployed to AI systems (timestamp)
  • Verified: When verification completed (timestamp + outcome)
  • Resolved: When alert marked as resolved (timestamp + status)

Example timeline:

2024-03-15 14:30 — Hallucination detected
2024-03-15 14:45 — Correction generated (Neural Fact Sheet)
2024-03-15 15:02 — Approved by: Sarah Chen (Fact Owner)
2024-03-15 15:08 — Deployed to production
2024-03-17 09:00 — Verification run completed
2024-03-17 09:15 — Verified successful
2024-03-17 09:15 — Alert resolved: Correction Verified
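Stage-to-stage durations can be computed directly from timestamps like the ones above. This is a minimal sketch using Python's standard library; the `timeline` dictionary and its field names are illustrative, not an official export schema.

```python
from datetime import datetime

# Timestamps taken from the example timeline above (hypothetical export keys).
timeline = {
    "detected": "2024-03-15 14:30",
    "approved": "2024-03-15 15:02",
    "deployed": "2024-03-15 15:08",
    "verified": "2024-03-17 09:15",
}

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def minutes_between(start: str, end: str) -> int:
    """Elapsed whole minutes between two timeline stages."""
    return int((parse(end) - parse(start)).total_seconds() // 60)

print(minutes_between(timeline["detected"], timeline["approved"]))  # 32
print(minutes_between(timeline["approved"], timeline["deployed"]))  # 6
print(minutes_between(timeline["detected"], timeline["verified"]))  # 2565
```

The detection-to-verification figure (about 42.75 hours here) is the same kind of number reported as "Total (Detection to Resolved)" in summary reports.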

Approval Chain

  • Approver: Who approved the correction?
  • Reason: Why did they approve / request changes / dismiss?
  • Modifications: If approver requested changes, what changed?
  • Auto-approval: Was this auto-approved or manually reviewed? (if auto-approved, shows the rule that triggered it)

Example:

Approver: Sarah Chen (Product Manager)
Approval Type: Manual
Approval Time: 2024-03-15 15:02
Reason: "Verified against latest pricing page; matches exactly"
Modifications: None (approved as-is)

Correction Details

Original AI Response:

"TruthVouch's Standard plan costs $50/month"

Truth Nugget (Ground Truth):

STATEMENT: Standard plan costs $500/month
CONTEXT: Effective Q1 2024; includes up to 5M cross-checks
SOURCE: Pricing page (pricing.truthvouch.com)
CONFIDENCE: High

Generated Correction (Neural Fact Sheet):

FACT: Pricing - Standard Plan
STATEMENT: TruthVouch Standard plan costs $500 per month
CONTEXT: Standard plan includes up to 5M cross-checks and 3 Truth Nuggets
SOURCE: pricing.truthvouch.com (updated Q1 2024)
CONFIDENCE: High
EXAMPLES:
- "$500 per month for the Standard plan"
- "Standard pricing is $500/month"
- "The Standard tier of TruthVouch costs $500 monthly"
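Before deployment, a Neural Fact Sheet like the one above can be treated as a structured record. The dictionary below mirrors the example's fields; the key names are assumptions drawn from that example, not an official schema.

```python
# Hypothetical structured form of the Neural Fact Sheet shown above;
# field names mirror the example, not a documented Shield schema.
fact_sheet = {
    "fact": "Pricing - Standard Plan",
    "statement": "TruthVouch Standard plan costs $500 per month",
    "context": "Standard plan includes up to 5M cross-checks and 3 Truth Nuggets",
    "source": "pricing.truthvouch.com (updated Q1 2024)",
    "confidence": "High",
    "examples": [
        "$500 per month for the Standard plan",
        "Standard pricing is $500/month",
        "The Standard tier of TruthVouch costs $500 monthly",
    ],
}

# A deployable record should have every field populated,
# including at least one example phrasing.
required = ("fact", "statement", "context", "source", "confidence", "examples")
assert all(fact_sheet.get(field) for field in required)
```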

Deployment Details

  • Deployment method: Neural Fact Sheet / Direct Correction / Prompt Engineering
  • Target systems: Which AI engines were affected? (ChatGPT, Claude, Gemini, etc.)
  • Deployment status: Successful / Failed (and why if failed)
  • Rollback info: Was this correction later rolled back? (timestamp + reason)

Example:

Deployment Method: Neural Fact Sheet
Targets: All monitored AI engines (OpenAI GPT-4, Anthropic Claude, Google Gemini)
Status: Successful
Deployed to Vector DB: 2024-03-15 15:08
Availability: All queries from 2024-03-15 15:08 onward

Verification Results

Verification status: Verified / Partially Verified / Unverified / Rolled Back

Verification details:

  • Verification date/time
  • Re-query prompt used
  • AI response received
  • Confidence in success (0-100%)
  • Notes from verification system

Example (Successful):

Verification Status: Verified
Verification Date: 2024-03-17 09:00
Re-Query Prompt: "What does TruthVouch Standard plan cost?"
AI Response: "The Standard plan costs $500 per month"
Confidence: 96% (matches fact sheet)
Result: Successful — AI now correct

Example (Unverified):

Verification Status: Unverified
Verification Date: 2024-03-17 09:00
Re-Query Prompt: "What does TruthVouch Standard plan cost?"
AI Response: "The Standard plan costs between $499-$501"
Confidence: 45% (partially matches; vague range)
Result: Requires investigation — fact sheet wording unclear?
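The two examples above suggest that verification status is derived from the confidence score. A sketch of that mapping, with thresholds that are illustrative assumptions rather than documented Shield defaults (both examples fall on the expected side of them):

```python
def classify_verification(confidence: int) -> str:
    """Map a verification confidence score (0-100) to a status.

    Thresholds are illustrative assumptions, not documented defaults:
    the examples above show 96% -> Verified and 45% -> Unverified.
    """
    if confidence >= 90:
        return "Verified"
    if confidence >= 70:
        return "Partially Verified"
    return "Unverified"

print(classify_verification(96))  # Verified
print(classify_verification(45))  # Unverified
```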

Filters

By Status:

  • Pending Approval (awaiting review)
  • Approved (awaiting deployment)
  • Deployed (live; awaiting verification)
  • Verified (successful)
  • Ineffective (correction failed; needs escalation)
  • Rolled Back (correction was reverted)

By Date:

  • Last 7 days
  • Last 30 days
  • Last 90 days
  • Custom range

By Fact:

  • Search by fact name or category
  • Filter by fact category (Financial, Product, Leadership, etc.)

By Team/Owner:

  • Corrections approved by specific person
  • Corrections for facts owned by specific person

By Severity:

  • Critical / High / Medium / Low

By AI Engine:

  • Which AI engines were affected? (ChatGPT, Claude, Gemini, etc.)

By Deployment Method:

  • Neural Fact Sheet / Direct Correction / Prompt Engineering

Full-text search across all corrections:

  • Search by hallucination text (“$50M” finds pricing hallucinations)
  • Search by fact name (“pricing”, “founding date”)
  • Search by approver name (“sarah chen”)
  • Search by fact category (“financial”)
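The same full-text search can be reproduced client-side over exported correction records. A minimal sketch, assuming a JSON export with the fields described earlier (the record contents here are made up for illustration):

```python
# Hypothetical exported correction records; field names are assumptions
# based on the correction-record description above.
corrections = [
    {"id": "cor_abc123", "fact": "Product pricing", "category": "Financial",
     "severity": "High", "approver": "Sarah Chen",
     "hallucination": "TruthVouch raised $50M"},
    {"id": "cor_def456", "fact": "Founding date", "category": "Company",
     "severity": "Low", "approver": "Sarah Chen",
     "hallucination": "Founded in 2015"},
]

def search(records, text):
    """Case-insensitive search across hallucination text, fact name,
    approver name, and fact category."""
    needle = text.lower()
    return [r for r in records
            if any(needle in str(r[key]).lower()
                   for key in ("hallucination", "fact", "approver", "category"))]

print([r["id"] for r in search(corrections, "$50M")])        # ['cor_abc123']
print([r["id"] for r in search(corrections, "sarah chen")])  # both records
```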

Analytics & Reporting

Correction Metrics

Volume:

  • Corrections per day/week/month
  • Trend: Are you correcting more or fewer hallucinations over time?
  • Breakdown by severity (how many Critical vs. Low?)

Effectiveness:

  • % of corrections verified successful (target: >90%)
  • Average time to verification
  • % requiring rollback (target: <1%)
  • % requiring re-correction (same fact hallucinated multiple times)

Efficiency:

  • Avg time from detection to approval
  • Avg time from approval to deployment
  • Avg time from deployment to verification
  • Bulk correction usage (% of corrections that were bulk approvals)
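Efficiency metrics like these can be recomputed from exported records. A minimal sketch computing a median approval time, assuming an export of (detected, approved) timestamp pairs; the sample data is made up:

```python
from datetime import datetime
from statistics import median

def minutes(start: str, end: str, fmt: str = "%Y-%m-%d %H:%M") -> float:
    """Elapsed minutes between two timestamps."""
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# Hypothetical (detected, approved) timestamp pairs from an export.
records = [
    ("2024-03-15 14:30", "2024-03-15 15:02"),  # 32 minutes
    ("2024-03-16 09:00", "2024-03-16 09:05"),  # 5 minutes
    ("2024-03-16 11:20", "2024-03-16 11:28"),  # 8 minutes
]

approval_times = [minutes(detected, approved) for detected, approved in records]
print(median(approval_times))  # 8.0
```

Medians are usually more informative than means here, since a single stalled approval can dominate an average.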

By Fact Category:

  • Which categories have most hallucinations? (Financial? Product?)
  • Which categories have lowest verification rate? (needs investigation)
  • Opportunity: Improve fact sheets or prompt strategies for low-performing categories

By Team/Owner:

  • Corrections per fact owner
  • Approval speed per person (fastest/slowest approvers)
  • Verification rate per owner (are some fact owners’ corrections more effective than others?)

Sample Report

Corrections Summary (Last 30 Days)
═════════════════════════════════
Total Corrections: 47

Breakdown by Severity:
  Critical: 3 (6%)
  High: 12 (26%)
  Medium: 24 (51%)
  Low: 8 (17%)

Effectiveness:
  Verified: 44 (94%)
  Partially Verified: 2 (4%)
  Unverified: 1 (2%)

Timeline (Median):
  Approval Time: 8 minutes
  Deployment Time: 15 seconds
  Verification Time: 36 hours
  Total (Detection to Resolved): 38 hours

Top Issues (Most Frequently Corrected):
  1. "Product pricing" — 8 corrections
  2. "Employee count" — 5 corrections
  3. "Founding date" — 4 corrections
  4. "Office location" — 3 corrections

Recommendations:
  - Update pricing fact sheet (most hallucinations)
  - Consider stronger embedding for "employee count"
  - Review product description clarity

Audit Trail for Compliance

What’s Logged

Complete audit trail of every correction for compliance reviews:

  • Who approved each correction (name + role)
  • When it was approved (date/time)
  • What was corrected (original AI response, correct fact, correction details)
  • Why it was approved (approver’s reason/notes)
  • Where it was deployed (which AI engines)
  • Result (verification outcome)

Access Control

  • Audit logs visible to: Governance, Compliance, Auditors (configurable)
  • Cannot be deleted — Immutable audit trail
  • Timestamped — All entries include UTC timestamp
  • Digitally signed — Can be verified as unaltered
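One common way to make an audit trail verifiably unaltered is a hash chain, where each entry's digest covers the previous entry's digest. Shield's actual signing scheme is not documented here; the sketch below only illustrates the general idea using the standard library.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def chain_entries(entries):
    """Append prev/hash fields so each entry's digest covers its predecessor."""
    prev = GENESIS
    chained = []
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chained.append({**entry, "prev": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every digest; any altered entry breaks the chain."""
    prev = GENESIS
    for record in chained:
        entry = {k: v for k, v in record.items() if k not in ("prev", "hash")}
        payload = json.dumps(entry, sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log = chain_entries([
    {"ts": "2024-03-15 15:02", "event": "approved", "by": "Sarah Chen"},
    {"ts": "2024-03-15 15:08", "event": "deployed"},
])
print(verify_chain(log))  # True
log[0]["by"] = "Someone Else"
print(verify_chain(log))  # False: any alteration invalidates the log
```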

Use Cases

Compliance Audit: “Show me all corrections to financial facts in Q1 2024”

  • Filter by fact category (Financial) + date range
  • Export CSV for auditor review
  • Auditor sees approval chain, verification results, deployment info

Regulatory Investigation: “Were we aware of the pricing hallucination?”

  • Search for corrections to pricing fact
  • Audit trail shows when detected, who approved, when deployed
  • Evidence you took action to correct misinformation

Post-Incident Review: “How did we handle the data breach claim?”

  • Search for corrections to that specific claim
  • See full timeline, who was involved, what happened
  • Learn from incident for future improvements

Exporting History

Export Options

By Correction:

  • Click individual correction → Click Export
  • Formats: PDF (printable), JSON (programmatic)

Bulk Export:

  1. Filter corrections (date range, fact category, etc.)
  2. Click Export All
  3. Select format: CSV (spreadsheet), PDF (report), JSON (data)
  4. Choose details to include (optional columns)
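A bulk CSV export with selectable columns can be sketched as follows; the column names are assumptions based on the audit fields described above, not the product's actual export format.

```python
import csv
import io

# Hypothetical filtered correction records to export.
corrections = [
    {"id": "cor_abc123", "fact": "Product pricing", "severity": "High",
     "approver": "Sarah Chen", "status": "Verified"},
]

def export_csv(records, columns):
    """Serialize records to CSV, keeping only the requested columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(export_csv(corrections, ["id", "fact", "severity", "approver", "status"]))
```

`extrasaction="ignore"` lets the caller choose a subset of columns without the writer rejecting records that carry extra fields, which matches the "choose details to include" step above.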

Scheduled Reports:

  • Create recurring export (daily/weekly/monthly)
  • Receive as email attachment or webhook
  • Keep long-term archive for compliance

Next Steps

  1. Review correction history — Look at recent corrections in your org
  2. Check verification rates — Are corrections working? (target: >90%)
  3. Identify repeat issues — Which facts keep getting hallucinated?
  4. Improve fact sheets — For low-verification-rate facts
  5. Set up audit exports — For compliance reporting