Understanding Trust Score

Trust Score is a 0-100 metric that summarizes the factual accuracy of your certified content. This guide explains score calculation, what affects your score, and how to interpret results.

Trust Score Basics

Trust Score represents the confidence level that your content is factually accurate based on verification against your truth nuggets:

  • 0-20: Critical issues — multiple significant inaccuracies
  • 21-40: Caution — notable drift or unverified claims
  • 41-60: Mixed — some claims verified, others unverified or questionable
  • 61-80: Good — most claims verified with minor gaps
  • 81-100: Excellent — all or nearly all claims verified

Score Calculation

Trust Score is calculated in two steps:

Step 1: Per-Claim Scoring

Each extracted claim receives a score:

Claim Score = {
  100      if exact match with truth nugget
  80-100   if semantic match (high confidence)
  50-79    if semantic match (low confidence)
  50       if unverified (no truth nugget found)
  0-30     if contradicts truth nugget (drift detected)
  0        if factually incorrect
}
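The mapping above can be sketched as a small function. This is an illustrative sketch only: the real scorer is internal to TruthVouch, and the `match_type` labels and confidence thresholds here are assumptions chosen to reproduce the ranges in the table.

```python
def claim_score(match_type: str, confidence: float = 0.0) -> float:
    """Illustrative per-claim score, following the ranges above.

    match_type: "exact", "semantic", "unverified", "drift", or "incorrect".
    confidence: 0.0-1.0 semantic-match confidence (used for "semantic" and "drift").
    """
    if match_type == "exact":
        return 100.0
    if match_type == "semantic":
        if confidence >= 0.9:
            # High-confidence semantic matches map linearly into 80-100.
            return 80 + 20 * (confidence - 0.9) / 0.1
        # Low-confidence semantic matches map into 50-79.
        return 50 + 29 * confidence / 0.9
    if match_type == "unverified":
        return 50.0  # balanced-mode default; see "Unverified Claims" below
    if match_type == "drift":
        # Contradictions land in 0-30, lower for stronger contradictions.
        return 30.0 * (1 - confidence)
    return 0.0  # factually incorrect
```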

Step 2: Aggregation

Claims are weighted and averaged:

Overall Score = 100 × (sum of weighted claim scores) / (total possible weighted score)

Weighting Factors:

  • Claim Importance: Critical claims (pricing, safety, legality) weighted heavier
  • Claim Prominence: Claims in headers/title weighted more than body text
  • Claim Frequency: Repeated claims reinforced (or flagged if inconsistent)

Example Calculation

Content: “TruthVouch costs $349/month, was founded in 2023, and monitors 9+ AI models.”

Claim                   Match      Score   Weight           Weighted
"Costs $349/month"      Exact      100     1.5 (critical)   150
"Founded in 2023"       Semantic    85     1.0               85
"Monitors 9+ models"    Exact      100     1.2              120

Total: 355 / 370 ≈ 96/100

What Affects Your Score

Factors That Increase Score

  • Explicit Truth Nuggets: Having documented truth nuggets for claims
  • Recent Updates: Truth nuggets updated recently show currency
  • Multiple Sources: Claims backed by multiple truth nuggets
  • High Confidence Matches: Semantic similarity >90%
  • Claim Clarity: Well-written, unambiguous claims

Factors That Decrease Score

  • Missing Truth Nuggets: Claims without matching truth nuggets (default 50 points)
  • Outdated Information: Truth nuggets older than 6 months
  • Drift Detected: Claims contradict your truth nuggets (0 points)
  • Ambiguous Claims: Claims that are vague or overgeneralized
  • Conflicting Statements: Multiple truth nuggets contradicting each other

Unverified Claims

Claims without matching truth nuggets receive a default score:

  • Strict Mode: 0 points (treated as incorrect until verified)
  • Balanced Mode: 50 points (neutral score)
  • Lenient Mode: 75 points (assumed correct unless contradicted)
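The three modes amount to a simple lookup from strictness to default score. A minimal sketch (the dictionary and function names are illustrative, not part of the TruthVouch API):

```python
# Default score assigned to a claim with no matching truth nugget,
# keyed by strictness mode.
UNVERIFIED_DEFAULTS = {
    "strict": 0,     # treated as incorrect until verified
    "balanced": 50,  # neutral
    "lenient": 75,   # assumed correct unless contradicted
}

def unverified_score(mode: str) -> int:
    """Return the default score an unverified claim receives in `mode`."""
    return UNVERIFIED_DEFAULTS[mode]
```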

Configuration:

client.certification.update_settings(
    strictness="balanced",
    unverified_claim_score=50
)

Interpreting Your Score

High Score (81-100)

Your content is well-documented and accurate:

  • Safe to publish/share
  • Minimal revision needed
  • Good candidate for customer-facing materials
  • Consider promoting this content template

Medium Score (61-80)

Your content is mostly accurate but has gaps:

  • Review unverified claims
  • Add missing truth nuggets if claims are valid
  • Update outdated information
  • Suitable for internal use or with disclaimers

Low Score (0-60)

Your content has significant accuracy issues:

  • Review all flagged claims carefully
  • Identify and fix hallucinations
  • Add truth nuggets for unverified claims
  • Not recommended for publication without revisions

Score Breakdown

View a detailed score breakdown in the certification dashboard:

By Category:

Product Information: 92/100
- Pricing: 100/100
- Features: 95/100
- Availability: 78/100
Company Information: 85/100
- History: 90/100
- Locations: 85/100
- Team: 75/100
Performance Claims: 68/100
- Speed: 50/100 (unverified)
- Reliability: 80/100
- Scalability: 70/100

By Confidence:

High Confidence (95%+): 47 claims, Avg Score 98/100
Medium Confidence (70-95%): 12 claims, Avg Score 82/100
Low Confidence (<70%): 5 claims, Avg Score 45/100
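A by-confidence breakdown like the one above could be reproduced from raw per-claim data by bucketing on match confidence and averaging each bucket. This is a sketch under assumed field shapes (claims as `(confidence, score)` pairs), not an existing API:

```python
from statistics import mean

def confidence_bucket(confidence: float) -> str:
    """Bucket a 0.0-1.0 match confidence using the dashboard's thresholds."""
    if confidence >= 0.95:
        return "high"
    if confidence >= 0.70:
        return "medium"
    return "low"

def breakdown_by_confidence(claims):
    """Group (confidence, score) pairs and average each bucket."""
    buckets = {"high": [], "medium": [], "low": []}
    for confidence, score in claims:
        buckets[confidence_bucket(confidence)].append(score)
    return {
        name: {"count": len(scores), "avg": round(mean(scores)) if scores else None}
        for name, scores in buckets.items()
    }
```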

Improving Your Score

1. Add Truth Nuggets

Create truth nuggets for unverified claims:

# Find unverified claims
report = client.certification.get_verification_report(cert_id)
for claim in report.unverified_claims:
    print(f"Add truth nugget: {claim.text}")

# Create a truth nugget for a valid but undocumented claim
client.truth_nuggets.create(
    category="performance",
    key="response_latency",
    value="Sub-200ms average response time",
    sources=["https://blog.truthvouch.com/performance"]
)

# Re-verify the certificate against the new nugget
client.certification.reverify(cert_id)

2. Update Outdated Information

Refresh truth nuggets that are out of date:

# Update pricing
client.truth_nuggets.update(
    nugget_id="pricing_starter",
    value="$349/month"
)
# Certificates automatically detect the change and update

3. Fix Hallucinations

Review drift alerts and correct inaccurate content:

# Get the drift report
drift = client.certification.get_drift_report(cert_id)
for drift_claim in drift.drifted_claims:
    print(f"Issue: {drift_claim.text}")
    print(f"Expected: {drift_claim.truth_nugget}")
4. Add Source Evidence

Strengthen matching by linking truth nuggets to authoritative sources:

client.truth_nuggets.update(
    nugget_id="founded_2023",
    sources=[
        "https://crunchbase.com/...",
        "https://blog.truthvouch.com/launch",
        "SEC filing 10-K 2024"
    ]
)

Historical Scoring

Track score changes over time:

Dashboard: Navigate to Certification → Select certificate → Score History

API:

history = client.certification.get_score_history(
    certificate_id="cert-123",
    days=90
)
for entry in history:
    print(f"{entry.date}: {entry.score}/100")
    print(f"  Changes: {entry.drift_count} drifts, {entry.updates_count} updates")

Next Steps

  • Badge Customization: Customize how your score is displayed
  • Auto-Revocation: Set thresholds for automatic certificate revocation
  • Monitoring: Set up alerts for score changes
  • Batch Analysis: Compare scores across multiple certificates