
Python SDK Quick Start

The TruthVouch Python SDK provides drop-in replacements for OpenAI, Anthropic, and Google AI providers. All LLM calls are automatically governed with fact-checking, PII detection, and policy enforcement.

Installation

pip install truthvouch

Requires Python 3.9+.

Basic Setup

1. Get an API Key

  1. Go to TruthVouch dashboard → Settings → API Keys
  2. Click Generate New Key
  3. Choose a Test key for development or a Live key for production
  4. Copy the key (it won’t be shown again)

2. Store the Key Securely

Create a .env file in your project (never commit this):

TRUTHVOUCH_API_KEY=tv_live_7e9c4a2b8f1d5c3a9e7b2f4d6c1a8e3b

Load it in your code:

import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.environ["TRUTHVOUCH_API_KEY"]

3. Initialize the Client

from truthvouch import TruthVouchClient
client = TruthVouchClient(
    gateway_url="https://gateway.truthvouch.com",
    api_key=api_key
)

Or use an async context manager (recommended, since it handles cleanup automatically):

async with TruthVouchClient(
    gateway_url="https://gateway.truthvouch.com",
    api_key=api_key
) as client:
    # Use client here
    pass

Drop-In Provider Replacement

OpenAI

Before (direct OpenAI):

from openai import AsyncOpenAI

client = AsyncOpenAI()
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is quantum computing?"}]
)
print(response.choices[0].message.content)

After (with TruthVouch governance):

from truthvouch import TruthVouchClient

async with TruthVouchClient(api_key="tv_live_...") as client:
    response = await client.openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is quantum computing?"}]
    )
    print(response.content)
    print(response.governance.verdict)  # "allowed" or "blocked"

No changes to your prompt, temperature, max_tokens, or other parameters are needed: the call signature is identical.

Anthropic

async with TruthVouchClient(api_key="tv_live_...") as client:
    response = await client.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Explain AI governance"}]
    )
    print(response.content[0].text)
    print(response.governance.verdict)

Google AI

async with TruthVouchClient(api_key="tv_live_...") as client:
    response = await client.google.generate_content(
        "Explain AI governance",
        model="gemini-1.5-pro"
    )
    print(response.text)
    print(response.governance.verdict)

Streaming

Stream responses incrementally with governance applied:

async with TruthVouchClient(api_key="tv_live_...") as client:
    stream = await client.openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Write a haiku about AI"}],
        stream=True
    )
    async for chunk in stream:
        if chunk.content:
            print(chunk.content, end="", flush=True)
        if chunk.done:
            print()
            print(f"Verdict: {chunk.governance_report.verdict}")
            print(f"Alerts: {len(chunk.governance_report.alerts)}")
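If you also need the full response text once streaming finishes, a small accumulator works. The sketch below assumes chunks expose `content` and `done` attributes as shown above; the `Chunk` dataclass and `fake_stream` generator are illustrative stand-ins, not part of the SDK:

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Optional

@dataclass
class Chunk:
    # Minimal stand-in for the SDK's stream chunk (assumed shape).
    content: Optional[str] = None
    done: bool = False

async def collect_stream(stream: AsyncIterator[Chunk]) -> str:
    """Accumulate streamed chunk contents into the full response text."""
    parts = []
    async for chunk in stream:
        if chunk.content:
            parts.append(chunk.content)
    return "".join(parts)

async def fake_stream():
    # Simulated stream for demonstration.
    for piece in ["Silicon ", "minds ", "dream"]:
        yield Chunk(content=piece)
    yield Chunk(done=True)

print(asyncio.run(collect_stream(fake_stream())))  # Silicon minds dream
```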

Manual Content Scanning

For post-hoc fact-checking of any text (not necessarily from an LLM):

result = await client.scan(
    prompt="Tell me about the founding of NASA",
    response="NASA was founded in 1958 by President Eisenhower"
)
print(f"Verdict: {result.verdict}")          # "allowed", "blocked", or "warning"
print(f"Trust Score: {result.trust_score}")  # 0.0 to 1.0
print(f"Alerts: {result.alerts}")            # list of detected issues

Response format:

{
    "verdict": "allowed",    # governance decision
    "trust_score": 0.92,     # 0.0 to 1.0 (lower = less trustworthy)
    "alerts": [
        {
            "severity": "medium",
            "type": "hallucination",
            "fact": "NASA was founded in 1958",
            "message": "NASA was indeed founded in 1958, but Eisenhower signed the act transforming NACA into NASA rather than founding the agency outright"
        }
    ],
    "pii_detected": False,
    "pii_entities": []
}
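A typical next step is turning the verdict and trust score into an application-level display decision. The helper below is illustrative only: the `min_trust` threshold and the decision labels are assumptions for this sketch, not part of the SDK:

```python
def should_display(result: dict, min_trust: float = 0.7) -> str:
    """Map a scan result to 'show', 'show_with_warning', or 'hide' (illustrative policy)."""
    if result["verdict"] == "blocked":
        return "hide"
    if result["verdict"] == "warning" or result["trust_score"] < min_trust:
        return "show_with_warning"
    return "show"

print(should_display({"verdict": "allowed", "trust_score": 0.92}))  # show
```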

Batch Scanning

For scanning large datasets offline:

import asyncio

# Submit a batch job
job = await client.batch.submit(
    source_url="s3://my-bucket/documents.jsonl",  # or gs://, file://
    format="jsonl",
    scan_mode="deep",  # "fast", "standard", or "deep"
    callback_url="https://myapp.com/webhooks/scan-complete"  # optional
)
print(f"Job ID: {job.id}")
print(f"Status: {job.status}")
print(f"Estimated completion: {job.estimated_completion}")

# Poll for status
while True:
    status = await client.batch.get_status(job.id)
    print(f"Progress: {status.progress_percent}% ({status.scanned}/{status.total})")
    if status.completed:
        print(f"Flagged items: {status.flagged_count}")
        break
    await asyncio.sleep(10)

Input format (JSONL):

{"prompt": "Tell me about Paris", "response": "Paris is the capital of France"}
{"prompt": "What is 2+2?", "response": "2+2 equals 4"}
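The input file can be produced with the standard library alone. This sketch writes the two example rows above to `documents.jsonl`:

```python
import json

rows = [
    {"prompt": "Tell me about Paris", "response": "Paris is the capital of France"},
    {"prompt": "What is 2+2?", "response": "2+2 equals 4"},
]

# One JSON object per line, as the batch scanner expects.
with open("documents.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

Upload the resulting file to your object store (or reference it via `file://` for local runs).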

Error Handling

Handle governance blocks and other errors:

from truthvouch import (
    PolicyBlockedError,
    GatewayUnreachableError,
    AuthenticationError,
    RateLimitError,
    TruthVouchError,
)

try:
    response = await client.openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "..."}]
    )
except PolicyBlockedError as e:
    # Request was blocked by a governance policy
    report = e.governance_report
    print(f"Blocked by policy: {report.policy_id}")
    print(f"Reason: {report.policy_violation_reason}")
except RateLimitError as e:
    # Rate limit exceeded
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except GatewayUnreachableError as e:
    # Gateway is down; check your fail_mode setting
    print(f"Gateway unreachable: {e.message}")
except AuthenticationError as e:
    # Invalid or expired API key
    print(f"Auth failed: {e.message}")
except TruthVouchError as e:
    # Any other SDK error
    print(f"Error: {e.message}")
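For `RateLimitError`, a common pattern is to honor the server's `retry_after` hint when present and otherwise fall back to capped exponential backoff with jitter. A sketch of the delay policy (the base, cap, and jitter values here are arbitrary choices, not SDK defaults):

```python
import random
from typing import Optional

def backoff_delay(attempt: int, retry_after: Optional[float] = None,
                  base: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based).

    Prefers the server-provided retry_after hint; otherwise doubles the
    base delay each attempt, caps it, and adds a little jitter.
    """
    if retry_after is not None:
        return retry_after
    return min(cap, base * (2 ** attempt)) + random.uniform(0, 0.5)
```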

Configuration

Gateway and Timeout

client = TruthVouchClient(
    gateway_url="https://gateway.truthvouch.com",
    api_key="tv_live_...",
    timeout=30.0,   # HTTP timeout in seconds
    max_retries=3   # retry attempts for transient errors
)

Circuit Breaker

For production, configure fail mode and circuit breaker:

client = TruthVouchClient(
    api_key="tv_live_...",
    fail_mode="open",                    # "open": bypass on failure; "closed": raise error
    circuit_breaker_threshold=5,         # failures before the circuit opens
    circuit_breaker_recovery_seconds=60  # seconds before a recovery probe
)
fail_mode           Circuit Open Behavior
"open" (default)    Bypass gateway, call provider directly (degraded but operational)
"closed"            Raise GatewayUnreachableError (safe but blocks traffic)
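The behavior above can be pictured as a small state machine. This is an illustrative sketch of the circuit-breaker pattern, not the SDK's actual internals; `threshold` and `recovery_seconds` mirror the constructor arguments above, and the injectable `clock` exists only to make the sketch testable:

```python
import time
from typing import Callable, Optional

class CircuitBreaker:
    """Illustrative circuit breaker: opens after `threshold` consecutive
    failures and allows a recovery probe after `recovery_seconds`."""

    def __init__(self, threshold: int = 5, recovery_seconds: float = 60.0,
                 clock: Callable[[], float] = time.monotonic):
        self.threshold = threshold
        self.recovery_seconds = recovery_seconds
        self.clock = clock
        self.failures = 0
        self.opened_at: Optional[float] = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()  # open (or re-open) the circuit

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None  # close the circuit

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # circuit closed: normal operation
        # circuit open: only allow a recovery probe after the cooldown
        return self.clock() - self.opened_at >= self.recovery_seconds
```

With `fail_mode="open"`, a request rejected by the breaker would go straight to the provider; with `fail_mode="closed"`, it would raise instead.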

SSL Verification

For self-hosted deployments:

client = TruthVouchClient(
    api_key="tv_live_...",
    verify_ssl=False  # only for testing/self-signed certs
)

Environment Variables

Configure SDK behavior via environment variables:

TRUTHVOUCH_API_KEY=tv_live_...
TRUTHVOUCH_GATEWAY_URL=https://gateway.truthvouch.com
TRUTHVOUCH_TIMEOUT_MS=30000
TRUTHVOUCH_FAIL_MODE=open
TRUTHVOUCH_MAX_RETRIES=3

Load automatically:

client = TruthVouchClient() # Reads from env vars
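To replicate this resolution in your own tooling, a stdlib-only sketch works; the defaults and the millisecond-to-second conversion below are assumptions based on the variables listed above, not the SDK's documented precedence rules:

```python
import os

def load_config() -> dict:
    """Illustrative env-var resolution with assumed defaults."""
    return {
        "api_key": os.environ.get("TRUTHVOUCH_API_KEY"),
        "gateway_url": os.environ.get("TRUTHVOUCH_GATEWAY_URL",
                                      "https://gateway.truthvouch.com"),
        # env var is in milliseconds; client timeout is in seconds
        "timeout": int(os.environ.get("TRUTHVOUCH_TIMEOUT_MS", "30000")) / 1000.0,
        "fail_mode": os.environ.get("TRUTHVOUCH_FAIL_MODE", "open"),
        "max_retries": int(os.environ.get("TRUTHVOUCH_MAX_RETRIES", "3")),
    }
```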

Complete Example

import asyncio
import os
from dotenv import load_dotenv
from truthvouch import TruthVouchClient, PolicyBlockedError, RateLimitError

async def main():
    load_dotenv()
    api_key = os.environ["TRUTHVOUCH_API_KEY"]

    async with TruthVouchClient(api_key=api_key) as client:
        # OpenAI drop-in
        try:
            response = await client.openai.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {
                        "role": "system",
                        "content": "You are a helpful assistant about AI governance."
                    },
                    {
                        "role": "user",
                        "content": "What is hallucination in AI?"
                    }
                ],
                temperature=0.7,
                max_tokens=500
            )
            print("Response:", response.content)
            print("Verdict:", response.governance.verdict)
            if response.governance.alerts:
                print("Alerts detected:")
                for alert in response.governance.alerts:
                    print(f"  - {alert.severity}: {alert.message}")
        except PolicyBlockedError as e:
            print(f"Request blocked: {e.governance_report.policy_id}")
        except RateLimitError as e:
            print(f"Rate limited, retry after {e.retry_after}s")
        except Exception as e:
            print(f"Error: {e}")

        # Manual scan
        scan_result = await client.scan(
            prompt="Who won the 2024 US Presidential election?",
            response="The 2024 US Presidential election winner was announced in November 2024."
        )
        print(f"\nManual scan verdict: {scan_result.verdict}")
        print(f"Trust score: {scan_result.trust_score}")

if __name__ == "__main__":
    asyncio.run(main())

Next Steps