
Govern Employee AI Usage

Your employees are using ChatGPT, Copilot, Claude, Cursor, and dozens of other AI tools — and you have no idea what they’re sharing. Customer data in prompts, credentials in conversations, company secrets pasted into free versions. This guide shows you how to govern all employee AI usage with policies, DLP masking, and audit trails.

Overview

AI Governance controls what employees can do with AI tools:

  1. Policy Engine: Define rules as code — who can use which AI tools, with what guardrails
  2. Firewall: Enforce policies in real-time (sub-200ms) for application LLM calls
  3. Sentinel Agent: Monitor and govern all employee AI tools (ChatGPT, Copilot, Claude, etc.)
  4. Audit Trail: Hash-chained record of every request, response, and policy decision
  5. Board Reports: Compliance proof for auditors and executives

The result: 100% governance of all AI traffic in your organization, with zero developer burden and no performance impact.
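A hash-chained audit trail works by having each entry's hash cover the previous entry's hash, so altering any historical record invalidates every record after it. A minimal sketch of the general technique (this illustrates the concept, not TruthVouch's actual record format):

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash covers the previous entry's hash,
    making later tampering with history detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every hash in order; return False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_record(chain, {"user": "a@co.com", "tool": "chatgpt", "decision": "allow"})
append_record(chain, {"user": "b@co.com", "tool": "phind", "decision": "deny"})
assert verify_chain(chain)

chain[0]["record"]["decision"] = "deny"  # tamper with history
assert not verify_chain(chain)
```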

Prerequisites

  1. TruthVouch account with Business tier or higher (Governance included)
  2. Basic policies defined (what data classification levels are allowed, which tools can be used, etc.)
  3. IT infrastructure access (to deploy Sentinel agent or configure SDK)

If you haven’t set these up, start with For CTOs & Engineering Leaders.

Step 1: Define Your Governance Policies

Create rules that govern AI usage. Policies are written in a rule language based on Rego, the open-source policy language.

  1. Go to Governance → Policy Engine → Create Policy
  2. Start with a basic policy template:
```
# Block unapproved AI tools
rule "allow_approved_tools" {
    tools := ["chatgpt", "claude-web", "copilot"]
    if input.tool not in tools {
        deny("Unapproved AI tool. Approved: ChatGPT, Claude, Copilot")
    }
}

# Mask PII before sending to AI
rule "mask_pii" {
    patterns := ["ssn", "credit_card", "email", "phone"]
    mask_sensitive(input.prompt, patterns)
}

# Block high-sensitivity data classifications
rule "data_classification_check" {
    if input.data_classification == "top_secret" {
        deny("Cannot share top-secret data with AI tools")
    }
    if input.data_classification == "confidential" {
        require_approval()  # Flag for manager approval
    }
}
```
  3. Add custom rules for your organization:

    • Which teams can use which AI tools?
    • What data classifications are allowed?
    • Which employees need manager approval?
    • Should certain topics be blocked (e.g., financial forecasts, customer lists)?
  4. Test your policy:

    • Use the Policy Sandbox to test rules
    • Try examples: “Can a sales rep use ChatGPT?” → Yes (if tool is approved)
    • “Can an engineer share a customer API key?” → No (PII detected)
  5. Publish to Staging first, monitor for false positives, then publish to Production

Pro tip: Start permissive (allow most tools, mask PII), then gradually add restrictions based on your compliance requirements.
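To make the mask_pii rule concrete: DLP masking of this kind can be approximated with a few regular expressions. A minimal sketch, assuming simple US-style formats and a `[TYPE]` placeholder convention (the production masker's patterns and placeholder format are not shown here):

```python
import re

# Hypothetical patterns mirroring the "ssn", "credit_card", "email", "phone" list
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_sensitive(prompt, patterns=PII_PATTERNS):
    """Replace each detected PII value with a typed placeholder."""
    for name, pattern in patterns.items():
        prompt = pattern.sub(f"[{name.upper()}]", prompt)
    return prompt

masked = mask_sensitive("Customer jane@acme.com, SSN 123-45-6789, called 555-867-5309")
```

The prompt still reads naturally for the model, but the sensitive values never leave the device.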

Step 2: Deploy for Application LLM Calls (Firewall SDK)

If your engineering teams use LLMs in production applications, deploy the Firewall SDK.

  1. Go to Governance → Firewall → SDK Setup
  2. Generate your API credentials (keep these secret)
  3. Install the SDK for your language:
```shell
# Python example
pip install truthvouch-governance
```
  4. Integrate into your application (a few lines of code):
```python
from truthvouch_governance import GovernanceFirewall

firewall = GovernanceFirewall(api_key="your-api-key")

# Instead of:
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[...]
# )

# Do this:
response = firewall.govern_completion(
    model="gpt-4",
    messages=[...],
    user_id="user@company.com",      # For audit trail
    data_classification="internal"   # User's data level
)
```
  5. Deploy to staging, test for latency (should be <200ms), then production
  6. Monitor Governance Dashboard → SDK Metrics for:
    • Policies enforced (approved vs. blocked)
    • False positive rate
    • Latency impact

Important: The SDK is transparent to users. If a request is blocked by policy, they see an error message but don’t see which policy rule triggered it.
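If you want to track the latency impact yourself, you can time each governed call and export the measurement to your own metrics pipeline. A sketch with a stub standing in for the real SDK call (the stub and its return shape are assumptions for illustration):

```python
import time

def stub_governed_call(**kwargs):
    """Stand-in for the real firewall call so the sketch is runnable."""
    return {"status": "allowed", "content": "..."}

def timed_call(call, **kwargs):
    """Invoke a governed call and report its latency in milliseconds,
    so the <200ms budget can be tracked in your own metrics."""
    start = time.perf_counter()
    response = call(**kwargs)
    latency_ms = (time.perf_counter() - start) * 1000
    return response, latency_ms

response, latency_ms = timed_call(
    stub_governed_call,
    model="gpt-4",
    user_id="user@company.com",
    data_classification="internal",
)
```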

Step 3: Deploy for Employee AI Tools (Sentinel Agent)

If employees are using ChatGPT, Copilot, Claude, etc., deploy the Sentinel Agent on their workstations.

For Windows (Group Policy)

  1. Go to Governance → Sentinel → Windows Deployment
  2. Download the MSI installer and Sentinel configuration file
  3. Push via Group Policy:
    • Group Policy Editor → Create new policy
    • Computer Configuration → Administrative Templates → Custom → Add Sentinel MSI
    • Set for all users or specific departments
  4. Trigger GPO update and monitor for agent installation

For macOS (Jamf)

  1. Download the PKG installer from Governance → Sentinel → macOS Deployment
  2. Upload to your Jamf Pro instance
  3. Create a Deployment Policy with the Sentinel package
  4. Target specific devices or departments
  5. Deploy and monitor installation status

For Windows (Intune)

  1. Go to Governance → Sentinel → Intune Setup
  2. Download the Sentinel configuration package
  3. Create a new Device Compliance Policy in Intune
  4. Upload the Sentinel package as a required app
  5. Deploy to device groups

Manual Deployment

  1. Generate installer download links for each employee
  2. Send email with installation instructions
  3. Employees click link and install locally
  4. Sentinel appears in Governance Dashboard → Connected Agents within 5 minutes

Verify Deployment

  1. Go to Governance → Sentinel → Agent Status
  2. You’ll see:
    • Which employees have agents installed and active
    • Last check-in time
    • Agent version
    • Policy version running on each device

Expect 80-95% deployment success in the first two weeks (some employees won’t have admin rights to install).

Step 4: Monitor Policy Enforcement

Watch your governance dashboard to see policies in action.

  1. Go to Governance → Dashboard

  2. You’ll see real-time metrics:

    • Requests processed: Total LLM calls + employee AI tool interactions
    • Policies enforced: How many matched a policy rule
    • Requests blocked: How many were denied by policy
    • Alerts triggered: High-risk interactions (top-secret data, unapproved tools)
  3. Drill down by:

    • User: Which employees are using AI tools most? Which triggered most alerts?
    • Tool: Which AI tools are most used? Which have most policy violations?
    • Policy: Which rule is triggering most? False positives?
    • Time: When is AI usage highest? Which teams?

Example dashboard might show:

  • 4,200 AI tool interactions this week
  • 92% approved (ChatGPT, Copilot, Claude)
  • 8% blocked (unapproved tools like personal Perplexity accounts)
  • 12 critical alerts (employees attempting to share customer data)
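These drill-downs are just group-by-and-count operations; if you export the interaction log, you can reproduce them in a few lines (the field names below are assumptions, not the export schema):

```python
from collections import Counter

# Illustrative sample of an exported interaction log
interactions = [
    {"user": "a@co.com", "tool": "chatgpt", "decision": "allow"},
    {"user": "a@co.com", "tool": "phind",   "decision": "block"},
    {"user": "b@co.com", "tool": "copilot", "decision": "allow"},
    {"user": "b@co.com", "tool": "chatgpt", "decision": "allow"},
]

# Per-tool usage and per-tool violation counts (the "Tool" drill-down)
usage_by_tool = Counter(i["tool"] for i in interactions)
blocks_by_tool = Counter(i["tool"] for i in interactions if i["decision"] == "block")

# Share of traffic approved overall (the headline approval metric)
approved = sum(1 for i in interactions if i["decision"] == "allow")
approval_rate = approved / len(interactions)
```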

Step 5: Handle Alerts & Violations

When policies are violated, alerts are created. Your team reviews and responds.

  1. Go to Governance → Alerts

  2. Review alerts sorted by severity:

    • Critical: Blocked attempt to share top-secret data; unapproved tool used with sensitive-data access
    • High: Potential credential sharing detected
    • Medium: Attempted use of an unapproved tool; PII masking triggered
    • Low: Tool used outside the approved list (still allowed by fallback policy)
  3. For each alert, decide:

    • Approve anyway: Employee gets exemption for this action
    • Approve permanently: Add user/team to whitelist for this policy
    • Document: Mark as understood (security incident for audit trail)
    • Block and notify employee: Prevent action and send warning
  4. Common scenarios:

    • Employee uses the company-approved ChatGPT account: Allow (tool is on the approved list)
    • Employee tries to use Phind: Block (not on the approved list)
    • Employee includes a customer email in a prompt: Mask the email and allow (PII masking rule)
    • Employee tries to share a customer API key: Block and alert the Compliance Officer
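One way to keep reviews consistent is to encode a default action per severity level, applying exemptions (whitelists) first. A sketch (severity levels match the list above; the action names are illustrative, not the product's):

```python
DEFAULT_ACTIONS = {
    "critical": "block_and_notify",  # e.g., attempted top-secret data share
    "high": "block_and_notify",      # e.g., potential credential sharing
    "medium": "document",            # e.g., PII masking triggered
    "low": "approve",                # e.g., tool allowed by fallback policy
}

def triage(alert, exemptions=frozenset()):
    """Return the default action for an alert, honoring per-user exemptions."""
    if alert["user"] in exemptions:
        return "approve_permanently"  # user/team already whitelisted
    return DEFAULT_ACTIONS[alert["severity"]]

action = triage({"user": "eng@co.com", "severity": "critical"})
```

Reviewers can still override the default per alert; the table only sets the starting point.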

Step 6: Generate Board Reports

Your executives and auditors want proof of governance.

  1. Go to Governance → Reports → Board Report

  2. Generate a report covering:

    • AI usage summary: Total interactions, top tools, top teams
    • Compliance: % of traffic governed, policies enforced, violations blocked
    • Security: PII masking instances, high-risk blocks, incident count
    • Trend: Month-over-month comparison, improvement areas
    • Audit trail: Sample of logged interactions (anonymized)
  3. Export as PDF for board presentations or auditor reviews

Example report snippet:

Q1 2026 AI Governance Summary:
- 542,000 AI tool interactions governed
- 100% of employee AI tool traffic monitored (512 agents deployed)
- 98% of requests approved
- 2% blocked (unapproved tools, policy violations)
- Zero data breaches related to AI tools
- PII masking prevented 12 instances of credential exposure
- Full audit trail: 542,000 complete records available

Step 7: Continuous Improvement

As you gather data, refine your policies.

  1. Monthly policy review:

    • Which rules block legitimate use? (Reduce false positives)
    • Which tools are most used? (Consider approving them)
    • Which data classifications cause most blocks? (Adjust rules or increase training)
  2. Quarterly governance assessment:

    • Run Governance Health Check to identify gaps
    • Add new approved tools as company expands
    • Update rules based on security incidents
  3. Annual compliance audit:

    • Export full audit trail (hash-chained, tamper-proof)
    • Generate audit report for SOC 2, ISO 42001, GDPR reviews
    • Document all policy changes and approvals

Real-World Example

Scenario: Your company uses ChatGPT for customer support summaries. You want governance but you don’t want to block productivity.

  1. Create policy:
```
rule "customer_support_chatgpt" {
    if input.user in ["support@company.com"] {
        allow("customer support team approved for ChatGPT")
    }
}

rule "mask_customer_data" {
    patterns := ["email", "phone", "account_id", "payment_method"]
    mask_sensitive(input.prompt, patterns)
}
```
  2. Deploy Sentinel to support team laptops

  3. Monitor:

    • Support team uses ChatGPT to summarize calls
    • PII masking removes customer emails/phone from prompts automatically
    • Dashboard shows 100+ interactions/day from support team
    • Zero data leaks
  4. Board report: “Customer support AI usage: 100% governed, masked 856 PII instances, zero breaches”

Metrics to Track

  • Coverage: % of organization with Sentinel agent deployed (target: 95%+)
  • Policy compliance: % of requests approved by policy (target: 95%+)
  • Security incidents: Blocked high-risk requests (target: <1% of traffic)
  • Tool adoption: Which AI tools are employees using? (helps with procurement planning)
  • PII protection: Instances of PII masking (should trend up as employees learn)
  • Audit trail: Full compliance log available (required for SOC 2, ISO 42001)
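Checking yourself against these targets is simple arithmetic once you have the raw counts. A sketch with illustrative numbers (not real benchmarks):

```python
def pct(part, whole):
    """Express part/whole as a percentage, rounded to one decimal."""
    return round(100 * part / whole, 1)

# Illustrative counts for one reporting period
employees, agents_active = 540, 512
requests_total, requests_approved, high_risk_blocks = 542_000, 531_160, 4_100

coverage = pct(agents_active, employees)             # target: 95%+
compliance = pct(requests_approved, requests_total)  # target: 95%+
incident_rate = pct(high_risk_blocks, requests_total)  # target: <1% of traffic
```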

Troubleshooting

SDK latency is too high (>500ms)

  • Check your network connection to the Governance Gateway
  • Verify API key is correct (invalid keys cause slower fallback)
  • Increase SDK timeout threshold temporarily
  • Contact support if latency persists
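While investigating, a hard client-side deadline keeps a slow gateway from stalling your application; whether you then fail closed (block the request) or fail open (let it through ungoverned) is a risk decision for your team. A sketch using a stub call (no SDK timeout API is assumed here):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def call_with_deadline(call, timeout_s, fail_open=False, **kwargs):
    """Run a governed call with a hard deadline; on timeout, fail closed
    (block) by default, or fail open (proceed ungoverned) if configured.
    Note: this simple sketch still waits for the worker thread on exit."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call, **kwargs)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            status = "ungoverned" if fail_open else "blocked"
            return {"status": status, "reason": "firewall timeout"}

def slow_stub(**kwargs):
    time.sleep(1)  # simulate a stalled Governance Gateway
    return {"status": "allowed"}

result = call_with_deadline(slow_stub, timeout_s=0.1)
```

A common pattern is to fail closed for sensitive data classifications and fail open only for low-risk traffic, so a gateway outage degrades gracefully without silently dropping governance everywhere.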

Too many false positives (legitimate requests blocked)

  • Review blocked rules in the dashboard
  • Adjust policy to whitelist common patterns
  • Retrain employees on acceptable use
  • Update rules after 2 weeks of feedback

Sentinel agent not installing on some machines

  • Check if users have admin rights (required for installation)
  • Verify macOS security settings (may require MDM approval)
  • Try manual installation link for employees with issues
  • Expected: 80-95% adoption in first month

Employees circumventing by using personal accounts

  • Educate team on security policies (why they exist)
  • Add network-level blocking for personal ChatGPT accounts (optional; Sentinel can detect these)
  • Offer approved alternatives (official ChatGPT team account)
  • Track circumvention attempts in audit logs

Next Steps

Questions? Reach out to your Solutions Engineer or post in the in-app support chat.