Compliance is where security meets bureaucracy — and if you handle it wrong, it’ll eat your engineering team alive. I’ve watched teams spend months manually screenshotting AWS consoles to prove compliance for an auditor. That’s insane. Compliance should be code, not spreadsheets.
This article is about treating compliance like any other engineering problem: automate it, test it, and make it continuous.
The Compliance Challenge
SOC2 and ISO 27001 share a common pattern: they define controls (security requirements) and demand evidence (proof you’re meeting them). The traditional approach:
- Auditor sends a 200-item checklist
- Engineers spend 3 months gathering screenshots
- Auditor finds gaps, engineers scramble
- Repeat annually
The modern approach: continuous compliance — your infrastructure proves compliance automatically, every day.
SOC2 Overview — Trust Services Criteria
SOC2 is organized around five Trust Services Criteria:
| Criteria | What It Covers | AWS Services |
|---|---|---|
| Security | Protection against unauthorized access | IAM, GuardDuty, WAF, Security Hub |
| Availability | System uptime and performance | CloudWatch, Auto Scaling, Route 53 |
| Processing Integrity | Accurate and complete processing | CloudTrail, Lambda DLQ, validation |
| Confidentiality | Protection of confidential data | KMS, S3 encryption, VPC |
| Privacy | Personal information handling | Macie, DLP, access controls |
The key insight: every SOC2 control maps to something you can check programmatically.
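As a sketch of that idea, here's a minimal programmatic check for one CC6.1-style requirement. The dict shape and function name are illustrative (not an AWS API) — in practice the input would come from boto3 or AWS Config:

```python
# Minimal sketch: one SOC2-style control expressed as a programmatic check.
# The resource dict shape is illustrative, not an actual AWS API response.

def check_s3_encryption(bucket: dict) -> dict:
    """SOC2 CC6.1-style check: the bucket must declare server-side encryption."""
    encrypted = bool(bucket.get('server_side_encryption'))
    return {
        'control': 'SOC2-CC6.1',
        'resource': bucket['name'],
        'status': 'COMPLIANT' if encrypted else 'NON_COMPLIANT',
    }

if __name__ == '__main__':
    good = {'name': 'logs', 'server_side_encryption': 'aws:kms'}
    bad = {'name': 'scratch'}
    print(check_s3_encryption(good)['status'])  # COMPLIANT
    print(check_s3_encryption(bad)['status'])   # NON_COMPLIANT
```

The rest of this article is essentially this function scaled up: a control ID, a resource, and a machine-readable verdict.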
ISO 27001 Overview
ISO 27001 centers on an Information Security Management System (ISMS); the 2022 revision defines 93 Annex A controls across four domains:
- Organizational (37 controls) — policies, roles, threat intelligence
- People (8 controls) — screening, awareness, remote work
- Physical (14 controls) — perimeters, equipment, utilities
- Technological (34 controls) — access control, cryptography, logging
The technological controls are highly automatable. The organizational and people controls still need process — but evidence collection can be automated.
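Even a manually produced artifact — a signed policy PDF, a training-completion export — can flow through the same pipeline if you wrap it in a tamper-evident manifest entry. A hypothetical sketch (the field names and SHA-256 integrity hash are my assumptions, not a standard):

```python
# Sketch: wrap a manually produced artifact in a manifest entry so that
# people/organizational evidence lands in the same store as automated evidence.
# Field names are illustrative — adapt to what your auditor expects.
import hashlib
import json
from datetime import datetime, timezone

def manifest_entry(control_id: str, artifact_name: str, artifact_bytes: bytes) -> dict:
    return {
        'control': control_id,
        'artifact': artifact_name,
        'sha256': hashlib.sha256(artifact_bytes).hexdigest(),  # integrity check for the stored file
        'collected_at': datetime.now(timezone.utc).isoformat(),
    }

if __name__ == '__main__':
    # ISO 27001:2022 A.6.3 — information security awareness and training
    entry = manifest_entry('ISO27001-A.6.3', 'security-training-2026.csv', b'alice,done\n')
    print(json.dumps(entry, indent=2))
```

The entry itself can then be uploaded next to the artifact, the same way the automated evidence scripts later in this article write to S3.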
Policy-as-Code with OPA
Open Policy Agent (OPA) lets you write compliance policies as code. Instead of a spreadsheet saying “all S3 buckets must be encrypted,” you write a Rego policy that enforces it.
```rego
# policy/s3_encryption.rego
package compliance.s3

import rego.v1  # OPA 1.0+ requires the `contains`/`if` rule syntax

# SOC2 CC6.1 — Encryption at rest
deny contains msg if {
    bucket := input.resource.aws_s3_bucket[name]
    not bucket.server_side_encryption_configuration
    msg := sprintf("S3 bucket '%s' missing encryption — violates SOC2 CC6.1", [name])
}

# SOC2 CC6.1 — Encryption should use KMS, not AES256
warn contains msg if {
    bucket := input.resource.aws_s3_bucket[name]
    algo := bucket.server_side_encryption_configuration[_].rule[_].apply_server_side_encryption_by_default[_].sse_algorithm
    algo == "AES256"
    msg := sprintf("S3 bucket '%s' uses AES256 instead of aws:kms — consider upgrading", [name])
}

# SOC2 CC6.6 — Block public access
deny contains msg if {
    some name
    input.resource.aws_s3_bucket[name]
    not input.resource.aws_s3_bucket_public_access_block[name]
    msg := sprintf("S3 bucket '%s' missing public access block — violates SOC2 CC6.6", [name])
}
```

Running OPA in CI/CD
```yaml
# .github/workflows/compliance.yml
name: Compliance Check
on:
  pull_request:
    paths:
      - 'terraform/**'

jobs:
  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Plan
        run: |
          cd terraform
          terraform init
          terraform plan -out=tfplan
          terraform show -json tfplan > tfplan.json

      - name: OPA Policy Check
        run: |
          curl -L -o opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
          chmod +x opa
          ./opa eval \
            --data policy/ \
            --input terraform/tfplan.json \
            "data.compliance" \
            --format pretty

      - name: Conftest (alternative)
        uses: open-policy-agent/conftest-action@v2
        with:
          files: terraform/tfplan.json
          policy: policy/
```

AWS Config for Continuous Compliance
AWS Config continuously evaluates your resources against rules. It’s the backbone of automated compliance.
```hcl
# Terraform — AWS Config rules for SOC2
resource "aws_config_config_rule" "s3_encryption" {
  name = "s3-bucket-server-side-encryption-enabled"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  }

  tags = {
    Compliance = "SOC2-CC6.1"
  }
}

resource "aws_config_config_rule" "iam_mfa" {
  name = "iam-user-mfa-enabled"

  source {
    owner             = "AWS"
    source_identifier = "IAM_USER_MFA_ENABLED"
  }

  tags = {
    Compliance = "SOC2-CC6.1,ISO27001-A.8.5"
  }
}

resource "aws_config_config_rule" "rds_encryption" {
  name = "rds-storage-encrypted"

  source {
    owner             = "AWS"
    source_identifier = "RDS_STORAGE_ENCRYPTED"
  }

  tags = {
    Compliance = "SOC2-CC6.1,ISO27001-A.8.24"
  }
}

resource "aws_config_config_rule" "cloudtrail_enabled" {
  name = "cloud-trail-cloud-watch-logs-enabled"

  source {
    owner             = "AWS"
    source_identifier = "CLOUD_TRAIL_CLOUD_WATCH_LOGS_ENABLED"
  }

  tags = {
    Compliance = "SOC2-CC7.2,ISO27001-A.8.15"
  }
}

# Aggregator for multi-account compliance view
resource "aws_config_configuration_aggregator" "org" {
  name = "org-compliance-aggregator"

  organization_aggregation_source {
    all_regions = true
    role_arn    = aws_iam_role.config_aggregator.arn
  }
}
```

Custom Config Rules
For controls that AWS managed rules don’t cover:
```python
# lambda/custom_config_rule.py
# Check that all EC2 instances have the required security tags
import json

import boto3

config = boto3.client('config')

REQUIRED_TAGS = ['Owner', 'DataClassification', 'Environment']


def lambda_handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    tags = configuration_item.get('tags', {})

    missing_tags = [t for t in REQUIRED_TAGS if t not in tags]

    if missing_tags:
        compliance_type = 'NON_COMPLIANT'
        annotation = f"Missing required tags: {', '.join(missing_tags)}"
    else:
        compliance_type = 'COMPLIANT'
        annotation = 'All required tags present'

    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': configuration_item['resourceType'],
            'ComplianceResourceId': configuration_item['resourceId'],
            'ComplianceType': compliance_type,
            'Annotation': annotation,
            'OrderingTimestamp': configuration_item['configurationItemCaptureTime']
        }],
        ResultToken=event['resultToken']
    )
```

Evidence Collection Automation
The killer feature of compliance automation: evidence that collects itself.
```python
# scripts/collect_evidence.py
"""
Automated evidence collection for SOC2/ISO27001 audits.
Run monthly via cron or Step Functions.
"""
import json
import time
from datetime import datetime

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
config = boto3.client('config')
iam = boto3.client('iam')

EVIDENCE_BUCKET = 'compliance-evidence-2026'


def collect_config_compliance():
    """SOC2 CC7.1 — Configuration management evidence"""
    results = config.get_compliance_summary_by_config_rule()
    timestamp = datetime.utcnow().strftime('%Y-%m-%d')
    s3.put_object(
        Bucket=EVIDENCE_BUCKET,
        Key=f'config-compliance/{timestamp}.json',
        Body=json.dumps(results, default=str),
        ServerSideEncryption='aws:kms'
    )


def collect_iam_credential_report():
    """SOC2 CC6.1 — Access control evidence"""
    # Report generation is asynchronous — poll until it's ready
    while iam.generate_credential_report()['State'] != 'COMPLETE':
        time.sleep(2)
    response = iam.get_credential_report()
    timestamp = datetime.utcnow().strftime('%Y-%m-%d')
    s3.put_object(
        Bucket=EVIDENCE_BUCKET,
        Key=f'iam-credentials/{timestamp}.csv',
        Body=response['Content'],
        ServerSideEncryption='aws:kms'
    )


def collect_encryption_status():
    """SOC2 CC6.1 / ISO27001 A.8.24 — Encryption evidence"""
    # Check all S3 buckets
    buckets = s3.list_buckets()['Buckets']
    evidence = []
    for bucket in buckets:
        try:
            enc = s3.get_bucket_encryption(Bucket=bucket['Name'])
            evidence.append({
                'bucket': bucket['Name'],
                'encrypted': True,
                'algorithm': enc['ServerSideEncryptionConfiguration']['Rules'][0]['ApplyServerSideEncryptionByDefault']['SSEAlgorithm']
            })
        except ClientError:
            # No server-side encryption configuration on this bucket
            evidence.append({
                'bucket': bucket['Name'],
                'encrypted': False
            })
    timestamp = datetime.utcnow().strftime('%Y-%m-%d')
    s3.put_object(
        Bucket=EVIDENCE_BUCKET,
        Key=f'encryption-status/{timestamp}.json',
        Body=json.dumps(evidence),
        ServerSideEncryption='aws:kms'
    )


if __name__ == '__main__':
    collect_config_compliance()
    collect_iam_credential_report()
    collect_encryption_status()
    print("Evidence collection complete")
```

Control-to-Infrastructure Mapping
The bridge between compliance and engineering is a control mapping that connects each SOC2/ISO control to specific infrastructure checks.
```yaml
# compliance/control_mapping.yml
controls:
  SOC2_CC6.1_Access_Control:
    description: "Logical and physical access controls"
    aws_config_rules:
      - iam-user-mfa-enabled
      - iam-root-access-key-check
      - access-keys-rotated
    opa_policies:
      - policy/iam_least_privilege.rego
    evidence:
      - iam-credentials/*.csv
    owner: security-team
    review_frequency: monthly

  SOC2_CC6.7_Encryption:
    description: "Encryption of data in transit and at rest"
    aws_config_rules:
      - s3-bucket-server-side-encryption-enabled
      - rds-storage-encrypted
      - encrypted-volumes
    opa_policies:
      - policy/s3_encryption.rego
      - policy/rds_encryption.rego
    evidence:
      - encryption-status/*.json
    owner: platform-team
    review_frequency: monthly

  SOC2_CC7.2_Monitoring:
    description: "Security event monitoring"
    aws_config_rules:
      - cloud-trail-cloud-watch-logs-enabled
      - guardduty-enabled-centralized
    evidence:
      - config-compliance/*.json
    owner: security-team
    review_frequency: weekly
```

Key Takeaways
- Compliance is code — write policies in OPA/Rego, not spreadsheets
- AWS Config is your compliance engine — 250+ managed rules, custom rules for the rest
- Automate evidence collection — monthly scripts that dump compliance state to S3
- Map controls to infrastructure — every SOC2/ISO control should link to a Config rule or OPA policy
- Make compliance continuous — check on every PR and every day, not once a year
- Tag everything — tags connect resources to controls, owners, and data classification
Compliance automation isn’t just about passing audits faster. It’s about building systems that are secure by default and can prove it at any moment.