LangChain Integration

The LangChain integration for TruthVouch provides drop-in components for detecting hallucinations and verifying factual accuracy in LangChain chains and agents.

Installation

pip install langchain-vouchedtruth

Components

VouchedTruthGuard

A simple callable that verifies text before using it:

from langchain_vouchedtruth import VouchedTruthGuard
guard = VouchedTruthGuard(threshold=0.8)
result = guard("The Eiffel Tower is located in Berlin.")
print(result.passed) # False — low trust score
print(result.trust_score) # e.g. 0.21
print(result.claims) # list of per-claim verification results

VouchedTruthRetriever

Wraps a document retriever and filters results by trust score:

from langchain_core.documents import Document
from langchain_vouchedtruth import VouchedTruthRetriever
retriever = VouchedTruthRetriever(
    api_key="tv_live_...",
    threshold=0.75,
    mode="spot_check",
)
docs = retriever.get_relevant_documents("What is the capital of France?")
# Documents below threshold are filtered out
# Each doc has doc.metadata["trust_score"] set
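The filtering rule itself is easy to state; here is a minimal sketch of the assumed semantics, using plain dicts in place of Document objects (scores and texts are made up for illustration):

```python
# Assumed filtering rule: keep a document only when its trust score
# meets the configured threshold.
docs = [
    {"page_content": "Paris is the capital of France.", "trust_score": 0.95},
    {"page_content": "Paris is the capital of Germany.", "trust_score": 0.12},
]
threshold = 0.75

kept = [d for d in docs if d["trust_score"] >= threshold]
print(len(kept))  # 1 — the low-scoring document is dropped
```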

VouchedTruthCallbackHandler

Automatically verifies LLM responses via a LangChain callback:

from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthCallbackHandler
handler = VouchedTruthCallbackHandler(
    api_key="tv_live_...",
    threshold=0.8,
    raise_on_low_trust=False,  # Set True to raise on low score
)
llm = ChatOpenAI(callbacks=[handler])
response = llm.invoke("Tell me about the Eiffel Tower.")
# Response is automatically fact-checked

TrustApiClient

Low-level async/sync client for manual Trust API calls:

from langchain_vouchedtruth import TrustApiClient
client = TrustApiClient(api_key="tv_live_...")
result = client.verify_sync("Paris is the capital of France.", mode="standard")
print(result["trustScore"]) # 0.97
print(result["claims"]) # Verified claims with scores
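Because the raw response is a plain dict, a manual pass/fail check mirrors what the guard does for you. A sketch against a hard-coded sample response, using only the field names shown above:

```python
def passed(response: dict, threshold: float = 0.8) -> bool:
    """Return True when the response's trust score meets the threshold."""
    return response.get("trustScore", 0.0) >= threshold

# Hard-coded sample in place of a real API call.
sample = {"trustScore": 0.97, "claims": []}
print(passed(sample))  # True
print(passed({"trustScore": 0.21}))  # False
```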

Common Patterns

Verify RAG Context

Before passing retrieved documents to the LLM, verify their accuracy:

from langchain_vouchedtruth import VouchedTruthRetriever
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA
retriever = VouchedTruthRetriever(
    api_key="tv_live_...",
    threshold=0.75,
)
llm = ChatOpenAI(model="gpt-4o")
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,  # Filters docs by trust score
)
response = qa.run("What is quantum computing?")

Post-Response Fact-Checking

Verify LLM output after generation:

from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthGuard
llm = ChatOpenAI(model="gpt-4o")
guard = VouchedTruthGuard(threshold=0.8)
response = llm.invoke("Explain photosynthesis")
result = guard(response.content)
if not result.passed:
    print(f"Warning: Low trust score {result.trust_score}")
    print(f"Flagged issues: {result.claims}")
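When a response fails the check, a common follow-up is to regenerate and re-verify. A hedged sketch of that loop, with stand-in functions in place of ChatOpenAI and VouchedTruthGuard so the control flow is visible on its own:

```python
def fake_llm(prompt: str, attempt: int) -> str:
    # Stand-in for llm.invoke(prompt).content
    return f"answer (attempt {attempt})"

def fake_trust_score(text: str) -> float:
    # Stand-in for guard(text).trust_score; pretend attempt 2 scores well.
    return 0.9 if "attempt 2" in text else 0.4

def generate_with_retry(prompt: str, threshold: float = 0.8,
                        max_retries: int = 3) -> str:
    """Regenerate until the trust score meets the threshold, then return."""
    text = ""
    for attempt in range(1, max_retries + 1):
        text = fake_llm(prompt, attempt)
        if fake_trust_score(text) >= threshold:
            return text
    return text  # fall back to the last attempt

print(generate_with_retry("Explain photosynthesis"))  # answer (attempt 2)
```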

Agent with Fact-Checking

Add verification to agent action outputs:

from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthGuard
from langchain import hub
llm = ChatOpenAI(model="gpt-4o")
tools = [...] # Your tools
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
)

# Wrap execution with post-check
guard = VouchedTruthGuard(threshold=0.75)

class FactCheckExecutor:
    def run(self, query):
        result = executor.invoke({"input": query})
        output = result.get("output", "")
        guard_result = guard(output)
        if not guard_result.passed:
            return {
                "output": output,
                "warning": f"Low trust score: {guard_result.trust_score}",
                "issues": guard_result.claims,
            }
        return result

# Use FactCheckExecutor instead of executor

Configuration

Environment Variables

VOUCHEDTRUTH_API_KEY=tv_live_...
VOUCHEDTRUTH_BASE_URL=http://localhost:5004/api/v1/trust
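Assuming the package follows the usual convention of falling back to these variables when no explicit `api_key` or `base_url` is passed (not confirmed here), they can also be set from Python before constructing any component:

```python
import os

# Placeholder values; substitute your real key and endpoint.
os.environ["VOUCHEDTRUTH_API_KEY"] = "tv_live_example"
os.environ["VOUCHEDTRUTH_BASE_URL"] = "http://localhost:5004/api/v1/trust"

api_key = os.environ.get("VOUCHEDTRUTH_API_KEY")
print(api_key)  # tv_live_example
```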

Scan Modes

Choose the verification mode for your use case:

Mode       | Speed  | Cost   | Best For
-----------|--------|--------|------------------------------------
spot_check | Fast   | Low    | Quick sampling, high-volume queries
standard   | Medium | Medium | Balanced accuracy and performance
deep       | Slow   | High   | High-stakes decisions, compliance
guard = VouchedTruthGuard(
    threshold=0.8,
    mode="standard",  # or "spot_check", "deep"
)
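The trade-offs above can also be captured in a small selection helper. This heuristic is illustrative only, not part of the package:

```python
def pick_mode(high_stakes: bool = False, high_volume: bool = False) -> str:
    """Map the documented trade-offs to a mode string."""
    if high_stakes:
        return "deep"        # compliance, high-stakes decisions
    if high_volume:
        return "spot_check"  # quick sampling for large workloads
    return "standard"        # balanced default

print(pick_mode(high_volume=True))  # spot_check
```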

Trust Score Threshold

Set the minimum trust score for acceptance:

# Strict: require a trust score of at least 0.9
guard = VouchedTruthGuard(threshold=0.9)

# Relaxed: accept trust scores of 0.5 and above
guard = VouchedTruthGuard(threshold=0.5)

# Default: 0.8
guard = VouchedTruthGuard(threshold=0.8)

Error Handling

from langchain_vouchedtruth import (
    VouchedTruthGuard,
    TrustApiClient,
    VouchedTruthError,
)

guard = VouchedTruthGuard(threshold=0.8)
try:
    result = guard("Some text to verify")
    if not result.passed:
        print(f"Trust score: {result.trust_score}")
except VouchedTruthError as e:
    print(f"Verification error: {e}")

Logging

Enable debug logging to see verification calls:

import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("langchain_vouchedtruth")
logger.setLevel(logging.DEBUG)
# Now you'll see:
# DEBUG: Verifying text: "The Eiffel Tower..."
# DEBUG: Trust score: 0.95

Complete RAG Example

import asyncio
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain_vouchedtruth import VouchedTruthRetriever, VouchedTruthGuard

async def main():
    # Load and chunk documents
    loader = TextLoader("knowledge.txt")
    documents = loader.load()
    splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(documents)

    # Create vector store
    embeddings = OpenAIEmbeddings()
    vector_store = FAISS.from_documents(chunks, embeddings)

    # Wrap retriever with TruthVouch verification
    base_retriever = vector_store.as_retriever()
    safe_retriever = VouchedTruthRetriever(
        base_retriever=base_retriever,
        api_key="tv_live_...",
        threshold=0.75,
    )

    # Create QA chain with verified retrieval
    llm = ChatOpenAI(model="gpt-4o")
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=safe_retriever,
    )

    # Verify final response
    guard = VouchedTruthGuard(threshold=0.8)
    query = "What is the capital of France?"
    response = qa_chain.run(query)
    result = guard(response)
    print(f"Answer: {response}")
    print(f"Trust Score: {result.trust_score}")
    if not result.passed:
        print(f"Warnings: {result.claims}")

if __name__ == "__main__":
    asyncio.run(main())

Performance Tips

  1. Batch verification: For multiple documents, use the batch API
  2. Cache results: Store trust scores in Redis to avoid re-checking
  3. Adjust threshold: Lower thresholds for exploratory queries, higher for fact-sensitive content
  4. Use spot_check mode: For large datasets, use quick sampling instead of deep verification

Next Steps