SOC 2 Trust Services Criteria
SOC 2 is a compliance framework for service organizations (SaaS, cloud, managed services). It establishes Trust Services Criteria (TSC) — five trust principles covering security, availability, processing integrity, confidentiality, and privacy. TruthVouch automates SOC 2 compliance by mapping AI controls to TSC requirements and collecting auditor-ready evidence.
What Is SOC 2?
SOC 2 is a framework developed by the AICPA (American Institute of Certified Public Accountants) for service organizations to demonstrate controls over user data and systems.
The Five Trust Principles
| Principle | Focus | Applies to AI |
|---|---|---|
| Security (CC) | Protect system assets from unauthorized access | Yes — all systems |
| Availability (A) | System availability and performance | Yes — AI service uptime |
| Processing Integrity (PI) | Data processing is accurate and complete | Yes — model outputs, decisions |
| Confidentiality (C) | Sensitive data protected from unauthorized disclosure | Yes — training data, customer data |
| Privacy (P) | Personal data handled per stated policies | Yes — data processing in AI systems |
Type I vs. Type II
- Type I — Auditor reviews controls at a point in time (single day)
- Type II — Auditor reviews controls over a review period (typically 6-12 months), confirming sustained compliance
For AI: Type II is standard. Auditors need to see ongoing monitoring, incident response, and control effectiveness over time.
SOC 2 Trust Criteria for AI Systems
Compliance AI maps AI-specific controls to SOC 2 TSC:
Security (CC) Controls
| Criterion | What It Means | AI Application |
|---|---|---|
| CC1 | Control environment — integrity, governance, and accountability | AI governance policy, roles |
| CC2 | Communication and information — relevant information is generated and shared | Incident reporting, audit logs |
| CC3 | Risk assessment — entity identifies and analyzes risks to objectives | AI risk assessment |
| CC4 | Monitoring activities — entity evaluates whether controls are operating effectively | Performance monitoring of AI systems |
| CC5 | Control activities — entity selects and develops control activities | Model testing, monitoring, human review |
| CC6 | Logical and physical access controls | Access control for models, data, infrastructure |
| CC7 | System operations — entity detects and responds to anomalies and incidents | Drift detection, incident response |
| CC8 | Change management — entity authorizes, designs, tests, and documents system changes | Change management for model updates |
| CC9 | Risk mitigation — entity mitigates business disruption and vendor risk | Third-party model and vendor risk management |
Availability (A) Controls
| Criterion | What It Means | AI Application |
|---|---|---|
| A1.1 | Entity maintains and monitors processing capacity against availability commitments | AI performance metrics, SLA tracking |
| A1.2 | Entity implements environmental protections, backup, and recovery infrastructure | Model/infrastructure backup, failover |
| A1.3 | Entity tests recovery plan procedures | Disaster recovery tests for AI services |
Processing Integrity (PI) Controls
| Criterion | What It Means | AI Application |
|---|---|---|
| PI1.1 | Entity obtains/generates, uses, and communicates quality information about processing | Model quality metrics, testing results |
| PI1.2-PI1.5 | System inputs, processing, outputs, and stored items are complete, accurate, and timely | Input validation, output monitoring, decision logging |
Confidentiality (C) Controls
| Criterion | What It Means | AI Application |
|---|---|---|
| C1.1 | Entity identifies and maintains confidential information to meet its objectives | Training data classification |
| C1.2 | Entity disposes of confidential information to meet its objectives | Data deletion procedures |
Privacy (P) Controls
| Criterion | What It Means | AI Application |
|---|---|---|
| P1 | Entity provides notice about its privacy practices | Policy on how AI systems use personal data |
| P2 | Entity communicates choices and obtains implicit or explicit consent | User consent for AI processing, tracking consent withdrawals |
| P3 | Entity collects personal information consistent with its objectives | Minimizing personal data in training sets |
| P4 | Entity limits the use, retention, and disposal of personal information | Data retention policies, deletion schedules |
| P5 | Entity grants data subjects access to their stored personal information | Data subject access requests |
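The mapping tables above can be sketched as a simple lookup structure. This is an illustrative sketch only — the criterion entries and field names below are hypothetical examples, not the actual TruthVouch schema:

```python
# Hypothetical mapping of SOC 2 criteria to AI controls and the evidence
# artifacts each one requires (illustrative data, not the real schema).
TSC_MAPPING = {
    "CC8.1": {
        "principle": "Security",
        "ai_control": "Change management for model updates",
        "evidence": ["change_log", "test_results", "approval_records"],
    },
    "A1.1": {
        "principle": "Availability",
        "ai_control": "AI service capacity monitoring",
        "evidence": ["sla_dashboard", "capacity_reports"],
    },
    "C1.2": {
        "principle": "Confidentiality",
        "ai_control": "Training data deletion procedures",
        "evidence": ["deletion_logs", "retention_policy"],
    },
}

def evidence_for(criterion: str) -> list[str]:
    """Return the evidence artifacts required for a given TSC criterion."""
    entry = TSC_MAPPING.get(criterion)
    return entry["evidence"] if entry else []

print(evidence_for("CC8.1"))  # ['change_log', 'test_results', 'approval_records']
```

A structure like this lets a readiness scan enumerate, per criterion, which evidence artifacts are still missing.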
SOC 2 Audit Process for AI
Pre-Audit (1-2 months before)
- Assessment — Compliance AI runs SOC 2 readiness scan
  - Identifies which TSC you are/aren’t meeting
  - Flags gaps in documentation or monitoring
  - Prioritizes remediation
- Documentation — Compile evidence:
  - Policies (security, data handling, change management)
  - Procedures (access requests, incident response, training)
  - System descriptions (AI systems, architecture, data flows)
  - Risk assessment results
  - Test/monitoring logs (last 6+ months)
- Evidence Collection — Use Compliance AI connectors:
  - Infrastructure logs (AWS, Azure, GCP, Kubernetes)
  - Access logs (GitHub, Okta, Active Directory)
  - Monitoring dashboards (Datadog, CloudWatch, Prometheus)
  - Incident logs (Slack, ServiceNow, incident tracking)
  - Training records (completion attestations)
- Management Review — Document evidence organization:
  - Assigned responsible teams for each control
  - Sign-off that controls are in place
  - Commitment to maintaining controls during audit period
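The evidence-collection step can be pictured as a normalization layer: each connector returns raw records, and the platform wraps them in a uniform, auditor-ready envelope. The connector interface and field names below are hypothetical, not the actual Compliance AI API:

```python
# Sketch of evidence normalization (hypothetical interface): raw connector
# records are wrapped in a common envelope tagged with the TSC criterion
# they support and a UTC collection timestamp.
from datetime import datetime, timezone

def collect_evidence(connector_name, records, criterion):
    """Wrap raw connector records in a uniform, auditor-ready envelope."""
    collected_at = datetime.now(timezone.utc).isoformat()
    return [
        {
            "source": connector_name,
            "criterion": criterion,
            "collected_at": collected_at,
            "record": rec,
        }
        for rec in records
    ]

# Example: identity-provider access logs mapped to CC6 (access control)
okta_records = [{"user": "alice", "event": "login", "mfa": True}]
evidence = collect_evidence("okta", okta_records, "CC6.1")
print(evidence[0]["criterion"])  # CC6.1
```

Tagging each record with its criterion at collection time is what makes the evidence searchable by control during the audit.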
Audit (1-2 weeks on-site)
- Planning meeting — Auditor understands scope and AI systems
- Document review — Auditor reviews all policies, procedures, risk assessments
- Tests of design — Does the control structure make sense?
- Tests of operating effectiveness — Are controls actually working?
  - Sample test: Auditor picks a log entry from production AI system
  - Traces it through system (access control, data encryption, audit trail)
  - Validates no unauthorized access occurred
- Incident response — If incidents occurred, auditor reviews response
- Management interviews — Auditor talks to relevant staff
- Report generation — Auditor issues opinion
Post-Audit
- Type II: SOC 2 issues no certificate — the auditor’s report covers the audit period and is typically considered current for 12 months; plan the next annual audit
- Remediation: If auditor finds control gaps, implement fixes and provide evidence of remediation
AI-Specific SOC 2 Focus Areas
Auditors increasingly focus on these AI-specific areas:
1. Model Training Data & Quality
What auditor checks:
- Where does training data come from?
- Is training data biased or poor quality?
- How is data quality validated?
- Is biased data detected and remediated?
Evidence:
- Data sourcing documentation
- Bias testing results
- Quality metrics (completeness, accuracy)
- Retraining frequency
TruthVouch support: Automated bias testing reports, data quality dashboards
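A minimal sketch of one data-quality check an auditor might sample — a missing-value (completeness) report per field. The threshold and field names are illustrative assumptions, not TruthVouch’s actual quality rules:

```python
# Toy completeness check (illustrative threshold): flag fields whose
# missing-value rate exceeds a limit, so sparse or skewed training data
# is caught and documented before a model is trained on it.
def completeness_report(rows, max_missing_rate=0.05):
    """Return per-field missing rates and whether each passes the threshold."""
    fields = {key for row in rows for key in row}
    report = {}
    for field in fields:
        missing = sum(1 for row in rows if row.get(field) is None)
        rate = missing / len(rows)
        report[field] = {"missing_rate": rate, "passes": rate <= max_missing_rate}
    return report

rows = [{"age": 34, "income": 50000}, {"age": None, "income": 62000}]
print(completeness_report(rows)["age"])  # {'missing_rate': 0.5, 'passes': False}
```

Persisting reports like this over time is exactly the kind of evidence trail the “quality metrics” row above refers to.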
2. Model Performance Monitoring
What auditor checks:
- Is model performance tracked post-deployment?
- What triggers retraining or model rollback?
- Do alerts fire when performance degrades?
- Is degradation investigated and documented?
Evidence:
- Performance dashboards (accuracy, latency, recall)
- Monitoring configuration
- Alert logs
- Incident response records
TruthVouch support: Auto-connects to monitoring tools (Datadog, Prometheus, CloudWatch)
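The degradation trigger auditors look for can be sketched as a comparison between a rolling window of recent accuracy and the accuracy recorded at deployment. The threshold value here is a hypothetical example, not a recommended setting:

```python
# Minimal degradation alert (illustrative threshold): fire when the mean
# accuracy over a recent window falls more than max_drop below the
# baseline recorded at deployment — the documented trigger for
# investigation, retraining, or rollback.
def should_alert(baseline_accuracy, recent_accuracies, max_drop=0.05):
    """True when mean recent accuracy drops more than max_drop below baseline."""
    if not recent_accuracies:
        return False
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > max_drop

print(should_alert(0.92, [0.91, 0.90, 0.89]))  # False — within tolerance
print(should_alert(0.92, [0.84, 0.85, 0.83]))  # True — degraded
```

In practice this logic lives in the monitoring stack (Datadog, Prometheus, CloudWatch); the alert logs it produces become the evidence listed above.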
3. Audit Trail & Logging
What auditor checks:
- Are all system accesses logged?
- Can you trace a model decision back to inputs?
- Is the audit log immutable (hash-chained)?
- Is the log retained per policy?
Evidence:
- Sample audit trail export
- Hash verification (showing tamper-proof)
- Log retention policy
- Log deletion procedures
TruthVouch support: Hash-chained audit trails, WORM (Write-Once-Read-Many) export
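The immutability property the auditor checks — a hash-chained log — can be sketched in a few lines. This is a simplified illustration of the general technique, not TruthVouch’s implementation:

```python
# Tamper-evident (hash-chained) audit log: each entry's hash covers the
# previous entry's hash, so altering any record invalidates every hash
# after it and verification fails.
import hashlib
import json

def append_entry(chain, event):
    """Append an event whose hash binds it to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {"event": event, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash in order; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"user": "alice", "action": "model_predict"})
append_entry(chain, {"user": "bob", "action": "model_update"})
print(verify(chain))                      # True
chain[0]["event"]["user"] = "mallory"     # tamper with the first record
print(verify(chain))                      # False — tampering detected
```

A hash-verification run like this, exported alongside the log, is the “hash verification (showing tamper-proof)” evidence item above.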
4. Change Management
What auditor checks:
- How are model updates tested?
- Is there a rollback procedure?
- Are model changes tracked?
- Do only authorized people approve changes?
Evidence:
- Change log (who changed what, when)
- Test results for each change
- Approval records
- Rollback procedures documented
TruthVouch support: Change management workflow, approval tracking
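The change-management checks above reduce to a deployment gate: a model update ships only with passing tests and an authorized approval. The role names and record fields below are hypothetical examples:

```python
# Sketch of a change-management gate (hypothetical roles and fields):
# the two facts it enforces — tests passed, authorized approver signed
# off — are exactly what the auditor samples from the change log.
AUTHORIZED_APPROVERS = {"ml-lead", "security-officer"}  # illustrative roles

def change_is_deployable(change):
    """A change deploys only with passing tests and an authorized approval."""
    return change.get("tests_passed") is True and \
           change.get("approved_by") in AUTHORIZED_APPROVERS

change = {"model": "fraud-v7", "tests_passed": True, "approved_by": "ml-lead"}
print(change_is_deployable(change))  # True
```

Recording each gate decision (who approved, which tests, when) produces the change log and approval records listed as evidence.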
5. Access Control
What auditor checks:
- Who can access model code, training data, production outputs?
- Are access levels reviewed annually?
- Is access revoked when staff leave?
- Are multi-factor authentication and strong passwords enforced?
Evidence:
- Access control policy
- Current access list (who has what permissions)
- Annual review attestation
- Access removal records
- MFA configuration
TruthVouch support: Integration with IAM systems (Okta, Azure AD)
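One common auditor finding — access not revoked after offboarding — can be surfaced by diffing the current permission list against the active-staff roster. The data shapes below are hypothetical, not a real IAM export format:

```python
# Sketch of an access-review check (hypothetical directory data): surface
# grants held by people who are no longer active staff, i.e. access that
# should have been revoked at offboarding.
def stale_access(access_list, active_staff):
    """Return grants held by users who are not in the active-staff roster."""
    return [grant for grant in access_list if grant["user"] not in active_staff]

access_list = [
    {"user": "alice", "resource": "training-data", "level": "read"},
    {"user": "carol", "resource": "model-repo", "level": "write"},
]
active_staff = {"alice", "bob"}
print(stale_access(access_list, active_staff))
# [{'user': 'carol', 'resource': 'model-repo', 'level': 'write'}]
```

Running this diff on a schedule, and attaching the (ideally empty) output to the annual review attestation, covers both the review and revocation evidence items above.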
Typical SOC 2 Timeline
| Phase | Duration | Effort |
|---|---|---|
| Assessment & Remediation | 2-3 months | 100-200 hours |
| Evidence Collection & Documentation | 2 months | 50-100 hours |
| Pre-Audit | 2 weeks | 20-40 hours |
| Audit (Type II) | 6-12 months + 1-2 weeks on-site | 40-60 hours (interviews) |
| Post-Audit Remediation | 2-4 weeks | 20-40 hours |
| Total | 6-15 months (later phases overlap the Type II audit window) | 230-440 hours |
Comparing SOC 2 to ISO 27001
| Aspect | SOC 2 | ISO 27001 |
|---|---|---|
| Scope | Service organizations (SaaS) | Any organization |
| Structure | 5 trust principles | 93 controls in 4 themes (ISO/IEC 27001:2022) |
| Certification | Attestation report with auditor’s opinion (no certificate) | Certification available |
| Focus | Service user protection | Information security |
| Adoption | Standard for SaaS; US-centric | Global; often used with SOC 2 |
Best practice: Many organizations implement ISO 27001 first (more comprehensive), then undergo a SOC 2 audit — the overlapping controls mean much of the ISO 27001 work also satisfies SOC 2 criteria.
Next Steps
- Start SOC 2 readiness: Go to Compliance > Frameworks > SOC 2 > Assessment
- Evidence collection: Evidence Connectors
- Prepare for audit: Audit-Ready Reports