# LangChain Integration

The LangChain integration for TruthVouch provides drop-in components for detecting hallucinations and verifying factual accuracy in LangChain agents and chains.
## Installation

```bash
pip install langchain-vouchedtruth
```

## Components
### VouchedTruthGuard
A simple callable that verifies text before using it:
```python
from langchain_vouchedtruth import VouchedTruthGuard

guard = VouchedTruthGuard(threshold=0.8)
result = guard("The Eiffel Tower is located in Berlin.")

print(result.passed)       # False — low trust score
print(result.trust_score)  # e.g. 0.21
print(result.claims)       # list of per-claim verification results
```

### VouchedTruthRetriever
Wraps a document retriever and filters results by trust score:
```python
from langchain_core.documents import Document
from langchain_vouchedtruth import VouchedTruthRetriever

retriever = VouchedTruthRetriever(
    api_key="tv_live_...",
    threshold=0.75,
    mode="spot_check",
)

docs = retriever.get_relevant_documents("What is the capital of France?")
# Documents below threshold are filtered out
# Each doc has doc.metadata["trust_score"] set
```

### VouchedTruthCallbackHandler
Automatically verifies LLM responses via a LangChain callback:
```python
from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthCallbackHandler

handler = VouchedTruthCallbackHandler(
    api_key="tv_live_...",
    threshold=0.8,
    raise_on_low_trust=False,  # Set True to raise on low score
)

llm = ChatOpenAI(callbacks=[handler])
response = llm.invoke("Tell me about the Eiffel Tower.")
# Response is automatically fact-checked
```

### TrustApiClient
Low-level async/sync client for manual Trust API calls:
```python
from langchain_vouchedtruth import TrustApiClient

client = TrustApiClient(api_key="tv_live_...")
result = client.verify_sync("Paris is the capital of France.", mode="standard")

print(result["trustScore"])  # 0.97
print(result["claims"])      # Verified claims with scores
```

## Common Patterns
### Verify RAG Context
Before passing retrieved documents to the LLM, verify their accuracy:
```python
from langchain_vouchedtruth import VouchedTruthRetriever
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

retriever = VouchedTruthRetriever(
    api_key="tv_live_...",
    threshold=0.75,
)

llm = ChatOpenAI(model="gpt-4o")

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,  # Filters docs by trust score
)

response = qa.run("What is quantum computing?")
```

### Post-Response Fact-Checking
Verify LLM output after generation:
```python
from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthGuard

llm = ChatOpenAI(model="gpt-4o")
guard = VouchedTruthGuard(threshold=0.8)

response = llm.invoke("Explain photosynthesis")
result = guard(response.content)

if not result.passed:
    print(f"Warning: Low trust score {result.trust_score}")
    print(f"Flagged issues: {result.claims}")
```

### Agent with Fact-Checking
Add verification to an agent's final output:
```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain_vouchedtruth import VouchedTruthGuard
from langchain import hub

llm = ChatOpenAI(model="gpt-4o")
tools = [...]  # Your tools

prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)

executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
)

# Wrap execution with post-check
guard = VouchedTruthGuard(threshold=0.75)

class FactCheckExecutor:
    def run(self, query):
        result = executor.invoke({"input": query})
        output = result.get("output", "")

        guard_result = guard(output)
        if not guard_result.passed:
            return {
                "output": output,
                "warning": f"Low trust score: {guard_result.trust_score}",
                "issues": guard_result.claims,
            }
        return result

# Use FactCheckExecutor instead of executor
```

## Configuration
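If you prefer not to hard-code credentials, a small fallback helper along these lines keeps key resolution explicit. `resolve_api_key` is a hypothetical helper, not part of the shipped package; it assumes `VOUCHEDTRUTH_API_KEY` is the variable to read, per the Environment Variables section.

```python
import os

def resolve_api_key(explicit=None):
    """Return an explicitly passed key, else fall back to the
    VOUCHEDTRUTH_API_KEY environment variable.

    Hypothetical helper — the shipped components may do this
    fallback internally, but resolving the key yourself keeps
    configuration visible in your own code.
    """
    key = explicit or os.environ.get("VOUCHEDTRUTH_API_KEY")
    if not key:
        raise RuntimeError("No TruthVouch API key configured")
    return key

os.environ["VOUCHEDTRUTH_API_KEY"] = "tv_live_example"
print(resolve_api_key())                    # falls back to the environment
print(resolve_api_key("tv_live_explicit"))  # an explicit key wins
```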
### Environment Variables
```bash
VOUCHEDTRUTH_API_KEY=tv_live_...
VOUCHEDTRUTH_BASE_URL=http://localhost:5004/api/v1/trust
```

### Scan Modes
Choose the verification mode for your use case:
| Mode | Speed | Cost | Best For |
|---|---|---|---|
| `spot_check` | Fast | Low | Quick sampling, high-volume queries |
| `standard` | Medium | Medium | Balanced accuracy and performance |
| `deep` | Slow | High | High-stakes decisions, compliance |
```python
guard = VouchedTruthGuard(
    threshold=0.8,
    mode="standard",  # or "spot_check", "deep"
)
```

### Trust Score Threshold
Set the minimum trust score for acceptance:
```python
# Strict: require a trust score of at least 0.9
guard = VouchedTruthGuard(threshold=0.9)

# Relaxed: accept trust scores of 0.5 and above
guard = VouchedTruthGuard(threshold=0.5)

# Default: 0.8
guard = VouchedTruthGuard(threshold=0.8)
```

## Error Handling
```python
from langchain_vouchedtruth import (
    VouchedTruthGuard,
    TrustApiClient,
    VouchedTruthError,
)

guard = VouchedTruthGuard(threshold=0.8)

try:
    result = guard("Some text to verify")
    if not result.passed:
        print(f"Trust score: {result.trust_score}")
except VouchedTruthError as e:
    print(f"Verification error: {e}")
```

## Logging
Enable debug logging to see verification calls:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("langchain_vouchedtruth")
logger.setLevel(logging.DEBUG)

# Now you'll see:
# DEBUG: Verifying text: "The Eiffel Tower..."
# DEBUG: Trust score: 0.95
```

## Complete RAG Example
```python
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain_vouchedtruth import VouchedTruthRetriever, VouchedTruthGuard

def main():
    # Load and chunk documents
    loader = TextLoader("knowledge.txt")
    documents = loader.load()

    splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(documents)

    # Create vector store
    embeddings = OpenAIEmbeddings()
    vector_store = FAISS.from_documents(chunks, embeddings)

    # Wrap retriever with TruthVouch verification
    base_retriever = vector_store.as_retriever()
    safe_retriever = VouchedTruthRetriever(
        base_retriever=base_retriever,
        api_key="tv_live_...",
        threshold=0.75,
    )

    # Create QA chain with verified retrieval
    llm = ChatOpenAI(model="gpt-4o")
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=safe_retriever,
    )

    # Verify final response
    guard = VouchedTruthGuard(threshold=0.8)

    query = "What is the capital of France?"
    response = qa_chain.run(query)
    result = guard(response)

    print(f"Answer: {response}")
    print(f"Trust Score: {result.trust_score}")
    if not result.passed:
        print(f"Warnings: {result.claims}")

if __name__ == "__main__":
    main()
```

## Performance Tips
- Batch verification: For multiple documents, use the batch API
- Cache results: Store trust scores in Redis to avoid re-checking
- Adjust threshold: Lower thresholds for exploratory queries, higher for fact-sensitive content
- Use spot_check mode: For large datasets, use quick sampling instead of deep verification
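The caching tip above can be sketched with a simple memoization layer keyed by a hash of the text. Everything here is illustrative rather than part of the shipped package: `verify_fn` stands in for a real TruthVouch call, and the plain dict stands in for Redis (swap in `redis-py` get/set with a TTL in production).

```python
import hashlib

def cache_trust_scores(verify_fn, cache=None):
    """Memoize a verification function by SHA-256 of the input text.

    `verify_fn` is any callable mapping text -> trust score. The dict
    is a stand-in for Redis; with redis-py you would use GET/SET with
    an expiry instead, so stale scores eventually get re-checked.
    """
    cache = {} if cache is None else cache

    def cached_verify(text):
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in cache:
            cache[key] = verify_fn(text)  # only hit the API on a miss
        return cache[key]

    return cached_verify

# Demo with a fake verifier that records how often the "API" is hit
calls = []
def fake_verify(text):
    calls.append(text)
    return 0.97

verify = cache_trust_scores(fake_verify)
verify("Paris is the capital of France.")
verify("Paris is the capital of France.")  # served from cache
print(len(calls))  # 1 — the second call never reached the "API"
```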