CI/CD Truth Gate

Prevent unreliable AI-generated content from reaching production. TruthVouch Truth Gate integrates into CI/CD pipelines to block deployments when AI outputs fail verification, ensuring accuracy standards are met before content ships.

What is a Truth Gate?

A Truth Gate is an automated quality check in your CI/CD pipeline:

```
Git Push
Code Review
[Truth Gate] ← Check for AI-generated content
  ├─ Generates output samples
  ├─ Verifies facts against knowledge base
  ├─ Checks against hallucination patterns
  ├─ Validates citations
  └─ PASS/FAIL decision
Build/Deploy
```

Truth Gates are ideal for:

  • Generated documentation (READMEs, API docs)
  • AI-assisted content (blog posts, marketing copy)
  • Synthetic test data
  • Generated code comments and docstrings
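The gate's PASS/FAIL decision boils down to checking every verified sample against a confidence floor and a citation requirement. A minimal sketch of that logic (the `Verification` fields and `gate_decision` helper are illustrative, not the actual TruthVouch API):

```python
from dataclasses import dataclass

@dataclass
class Verification:
    confidence: float       # 0.0-1.0 score from the verifier
    missing_citations: int  # claims with no supporting source

def gate_decision(results, min_confidence=0.85, allow_warnings=False):
    """Return True (PASS) only if every sample clears the bar."""
    for r in results:
        if r.confidence < min_confidence:
            return False
        if not allow_warnings and r.missing_citations > 0:
            return False
    return True

print(gate_decision([Verification(0.92, 0)]))        # → True  (high confidence, fully cited)
print(gate_decision([Verification(0.70, 0)], 0.85))  # → False (below threshold)
```

The `min_confidence` and `allow_warnings` knobs here mirror the action inputs shown in the workflows below.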

GitHub Actions Implementation

Basic Truth Gate Workflow

```yaml
name: Truth Gate

on:
  pull_request:
    paths:
      - 'docs/**'
      - 'content/**'

jobs:
  truth-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Run Truth Gate
        uses: truthvouch/truth-gate-action@v1
        with:
          api-key: ${{ secrets.TRUTHVOUCH_API_KEY }}
          paths: docs/**,content/**
          min-confidence: 0.85
          allow-warnings: false

      - name: Comment on PR
        if: failure()
        uses: actions/github-script@v7
        with:
          github-token: ${{ github.token }}
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '❌ Truth Gate failed: AI-generated content did not meet accuracy standards.'
            })
```

Advanced Configuration

```yaml
name: Truth Gate - Advanced

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  truth-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Extract AI-Generated Content
        id: extract
        run: |
          # Find files with AI generation markers
          grep -r "<!-- AI_GENERATED -->" docs/ || true
          # Join matches with commas: raw newlines would break $GITHUB_OUTPUT
          echo "ai_files=$(find docs -name '*.md' -exec grep -l 'AI_GENERATED' {} \; | tr '\n' ',')" >> $GITHUB_OUTPUT

      - name: Truth Gate - Documentation
        if: steps.extract.outputs.ai_files != ''
        uses: truthvouch/truth-gate-action@v1
        with:
          api-key: ${{ secrets.TRUTHVOUCH_API_KEY }}
          paths: ${{ steps.extract.outputs.ai_files }}
          knowledge-base-id: ${{ secrets.KB_DOCS }}
          min-confidence: 0.9
          fail-on-warnings: true
          report-format: json
          report-path: truth-gate-report.json

      - name: Check for Critical Issues
        run: |
          # Wrap in an array so jq counts matches (a bare `length` on each
          # object would return its key count instead)
          CRITICAL=$(jq '[.issues[] | select(.severity == "critical")] | length' truth-gate-report.json)
          if [ "$CRITICAL" -gt 0 ]; then
            echo "Critical accuracy issues found: $CRITICAL"
            exit 1
          fi

      - name: Upload Report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: truth-gate-report
          path: truth-gate-report.json
```
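The jq-based critical-issue check can equally be written in Python, which is easier to extend (for example, to weight severities differently). This sketch assumes the JSON report exposes an `issues` array with a `severity` field, matching the workflow above:

```python
import json

def count_critical(report_path: str) -> int:
    """Count critical-severity issues in a Truth Gate JSON report."""
    with open(report_path) as f:
        report = json.load(f)
    return sum(1 for issue in report.get("issues", [])
               if issue.get("severity") == "critical")

# Demo with a synthetic report file:
with open("truth-gate-report.json", "w") as f:
    json.dump({"issues": [{"severity": "critical"},
                          {"severity": "warning"}]}, f)
print(count_critical("truth-gate-report.json"))  # → 1
```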

GitLab CI Implementation

```yaml
truth-gate:
  stage: test
  image: truthvouch/truth-gate-cli:latest
  script:
    - |
      truth-gate verify \
        --api-key $TRUTHVOUCH_API_KEY \
        --paths docs/**,content/** \
        --knowledge-base-id $KB_ID \
        --min-confidence 0.85 \
        --output json \
        --report truth-gate-report.json
    # Fail on critical issues
    - |
      CRITICAL=$(jq '.summary.critical_issues' truth-gate-report.json)
      if [ "$CRITICAL" -gt 0 ]; then
        echo "❌ Critical accuracy issues: $CRITICAL"
        exit 1
      fi
      WARNING=$(jq '.summary.warning_issues' truth-gate-report.json)
      if [ "$WARNING" -gt 5 ]; then
        echo "⚠️ Too many warnings: $WARNING (max: 5)"
        exit 1
      fi
  artifacts:
    # The JSON report is not JUnit XML, so expose it as a plain artifact
    # rather than under `reports: junit:`
    paths:
      - truth-gate-report.json
```

Programmatic Integration

Python: Custom Verification Script

```python
#!/usr/bin/env python3
"""Truth Gate for AI-generated content in a PR."""
import asyncio
import os
import sys
from pathlib import Path

from truthvouch.shield import VerificationClient


def extract_ai_content(paths: list[str]) -> dict[str, str]:
    """Extract files marked as AI-generated."""
    ai_files = {}
    for path_pattern in paths:
        for filepath in Path(".").glob(path_pattern):
            if not filepath.is_file():
                continue
            with open(filepath) as f:
                content = f.read()
            # Check for AI generation markers
            if "AI_GENERATED" in content or "auto-generated" in content:
                ai_files[str(filepath)] = content
    return ai_files


async def verify_all(files: dict[str, str], min_confidence: float = 0.85):
    """Verify all AI-generated content."""
    client = VerificationClient(api_key=os.getenv("TRUTHVOUCH_API_KEY"))
    results = {}
    for filepath, content in files.items():
        print(f"Verifying {filepath}...", end=" ", flush=True)
        try:
            result = await client.verify_fact(
                text=content,
                knowledge_base_id=os.getenv("KB_ID"),
                check_citations=True,
            )
            passed = result.confidence >= min_confidence
            status = "✓ PASS" if passed else "✗ FAIL"
            print(f"{status} (confidence: {result.confidence:.1%})")
            results[filepath] = {
                "confidence": result.confidence,
                "passed": passed,
                "issues": result.issues,
                "missing_citations": result.missing_citations,
            }
        except Exception as e:
            print(f"✗ ERROR: {e}")
            results[filepath] = {"error": str(e), "passed": False}
    return results


def report(results: dict) -> int:
    """Generate report and return exit code."""
    passed = sum(1 for r in results.values() if r.get("passed"))
    failed = len(results) - passed
    print(f"\n{'=' * 60}")
    print(f"Truth Gate Report: {passed} passed, {failed} failed")
    print(f"{'=' * 60}\n")
    for filepath, result in results.items():
        if result.get("error"):
            print(f"❌ {filepath}: {result['error']}")
        elif result.get("passed"):
            print(f"✓ {filepath}: {result['confidence']:.1%}")
        else:
            print(f"✗ {filepath}: {result['confidence']:.1%}")
            for issue in result.get("issues", []):
                print(f"  - {issue['text']}: {issue['reason']}")
    return 0 if failed == 0 else 1


if __name__ == "__main__":
    paths = ["docs/**/*.md", "content/**/*.md"]
    ai_files = extract_ai_content(paths)
    if not ai_files:
        print("No AI-generated content found")
        sys.exit(0)
    print(f"Found {len(ai_files)} AI-generated files\n")
    results = asyncio.run(verify_all(ai_files, min_confidence=0.85))
    sys.exit(report(results))
```

Custom Approval Workflows

Confidence-Based Promotion

```yaml
truth-gate-custom:
  stage: test
  script:
    - truth-gate verify --output json > report.json
    # Tier 1: Auto-merge if very high confidence (>95%)
    - |
      if jq -e '.summary.overall_confidence > 0.95' report.json; then
        echo "✓ Auto-approved: High confidence"
        exit 0
      fi
    # Tier 2: Require manual review if medium confidence (80-95%)
    - |
      if jq -e '.summary.overall_confidence > 0.80' report.json; then
        echo "⚠️ Manual review required"
        exit 1  # Block deploy, awaiting human approval
      fi
    # Tier 3: Auto-reject if low confidence (<80%)
    - |
      echo "✗ Rejected: Low confidence"
      exit 1
```
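When the gate runs from a script instead of shell, the same three tiers can be expressed as a small function. The tier names and thresholds mirror the job above; the `promotion_tier` helper itself is illustrative, not part of the TruthVouch API:

```python
def promotion_tier(overall_confidence: float) -> str:
    """Map an overall confidence score to the three-tier policy."""
    if overall_confidence > 0.95:
        return "auto-merge"      # Tier 1: no human needed
    if overall_confidence > 0.80:
        return "manual-review"   # Tier 2: block, await approval
    return "reject"              # Tier 3: auto-reject

print(promotion_tier(0.97))  # → auto-merge
print(promotion_tier(0.85))  # → manual-review
print(promotion_tier(0.60))  # → reject
```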

Team Approval Integration

"""Post truth-gate results to Slack for human review."""
import json
import os
from slack_sdk import WebClient
def notify_review_team(report_path: str):
slack = WebClient(token=os.getenv("SLACK_BOT_TOKEN"))
with open(report_path) as f:
report = json.load(f)
summary = report["summary"]
confidence = summary["overall_confidence"]
if confidence > 0.95:
emoji = ""
color = "good"
elif confidence > 0.80:
emoji = "⚠️"
color = "warning"
else:
emoji = ""
color = "danger"
slack.chat_postMessage(
channel="content-approval",
blocks=[
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"{emoji} Truth Gate Report\n*Confidence:* {confidence:.1%}\n*Issues:* {summary['total_issues']}"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"<{os.getenv('CI_PIPELINE_URL')}|View Full Report>"
}
}
]
)

Best Practices

Workflow Design

  • Gate documentation and content (80% of hallucination risk)
  • Skip generated code (use linters/tests instead)
  • Use 90% confidence for critical content (legal, medical)
  • Use 80% confidence for general content
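Those per-content-type thresholds are easiest to keep consistent across pipelines when they live in one policy map. A small sketch (the category names are illustrative):

```python
# Confidence floors per content category; unknown categories
# fall back to the strictest floor as a safe default.
CONFIDENCE_THRESHOLDS = {
    "legal": 0.90,    # critical content: strict
    "medical": 0.90,
    "general": 0.80,  # blog posts, marketing copy
}

def threshold_for(category: str) -> float:
    """Return the confidence floor for a content category."""
    return CONFIDENCE_THRESHOLDS.get(category, 0.90)

print(threshold_for("general"))  # → 0.8
print(threshold_for("unknown"))  # → 0.9
```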

Quality Control

  • Store all reports in artifact storage
  • Alert on sudden confidence drops
  • Review failures in weekly reports
  • Track improvements over time

Cost Management

  • Run Truth Gate only on modified files (not whole repo)
  • Batch verifications for large document sets
  • Use caching layer for identical content
  • Monitor per-pipeline costs
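A caching layer can be as simple as keying past verdicts on a hash of the file's content, so identical content is never verified twice. A minimal sketch, where the `verify` callable stands in for the real API call:

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path(".truth-gate-cache.json")

def cached_verify(content: str, verify) -> dict:
    """Return a verification result, reusing a prior verdict when the
    content hash matches (identical content is verified only once)."""
    key = hashlib.sha256(content.encode()).hexdigest()
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    if key not in cache:
        cache[key] = verify(content)  # only hit the paid API on a cache miss
        CACHE_FILE.write_text(json.dumps(cache))
    return cache[key]

# Demo with a stand-in verifier that records how often it is called:
CACHE_FILE.unlink(missing_ok=True)
calls = []
def fake_verify(text):
    calls.append(text)
    return {"confidence": 0.9}

cached_verify("same text", fake_verify)
cached_verify("same text", fake_verify)
print(len(calls))  # → 1  (second lookup served from the cache)
```

In CI, persist the cache file between runs (e.g. as a pipeline cache) so the savings carry across pipelines, not just within one.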

Troubleshooting

Q: Pipeline times out on large content

  • Split files into smaller chunks (500-1000 words)
  • Verify only changed files (use git diff)
  • Use batch API for parallel verification
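Splitting is straightforward: break documents on word boundaries into roughly 500-word chunks and verify each chunk independently. A hypothetical helper, not part of the truth-gate CLI:

```python
def chunk_words(text: str, max_words: int = 500) -> list[str]:
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

doc = "word " * 1200  # a 1200-word document
chunks = chunk_words(doc, max_words=500)
print([len(c.split()) for c in chunks])  # → [500, 500, 200]
```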

Q: Too many false positives

  • Lower the confidence threshold (e.g. from 0.90 to 0.85) so borderline-but-accurate content passes
  • Add domain-specific knowledge base
  • Review and adjust rules weekly

Q: Verification cost is high

  • Cache results for unchanged files
  • Run Truth Gate only on PR stage
  • Use sampling for large content sets

Next Steps