ISO 27001 Annex A 5.36 is a security control that mandates the regular review of organizational operations against established information security policies. This formalized oversight mechanism identifies policy deviations and technical configuration drift, providing the business benefit of fewer unforced security errors and verifiable trust for enterprise stakeholders.
In information security, the gap between knowing the rules and actually following them is where risk thrives. ISO 27001 Annex A 5.36, “Compliance with policies, rules and standards for information security,” is the primary control designed to close this “knowing-doing gap.” It transforms security policies from static documents into living, breathing habits that protect an organisation daily.
For fast-moving AI companies, where data is the lifeblood and innovation is constant, mastering this control is not a bureaucratic hurdle – it is an essential operational practice for building client trust, demonstrating resilience, and turning compliance into a competitive advantage. This guide provides a clear, practical roadmap for implementation.
Table of contents
- The “No-BS” Translation: Decoding the Requirement
- The Business Case: Why This Actually Matters for AI Companies
- DORA, NIS2 and AI Regulation: Check Your Work
- ISO 27001 Toolkit vs SaaS Platforms: The Compliance Trap
- The Unique Compliance Challenges for AI Companies
- Your Practical Steps to Compliance
- The Evidence Locker: What the Auditor Needs to See
- Common Pitfalls & Auditor Traps
- Handling Exceptions: The “Break Glass” Protocol
- The Process Layer: “The Standard Operating Procedure (SOP)”
The “No-BS” Translation: Decoding the Requirement
Let’s strip away the consultant-speak. Annex A 5.36 asks: “Do people actually follow the rules you wrote down?”
| The Auditor’s View (ISO 27001) | The AI Company View (Reality) |
|---|---|
| “Compliance with the organisation’s information security policy… shall be regularly reviewed.” | Spot checks. Go to a developer’s desk. Ask: “Is your hard drive encrypted?” Check it. If it is, great. If not, your policy is a lie. Go to HR. Ask: “Do you screen candidates before hiring?” Check the last 3 hires. If the background check is missing, you are non-compliant. |
The Business Case: Why This Actually Matters for AI Companies
Why should a founder care about “Policy Compliance”? Because unforced errors kill deals.
The Sales Angle
Enterprise clients will ask: “How do you ensure your employees follow security procedures?” If your answer is “We trust them,” you lose the deal. If your answer is “We conduct monthly compliance spot checks and enforce disciplinary action for repeated violations,” you win trust. A 5.36 proves you are running a disciplined ship.
The Risk Angle
The “Shadow IT” Leak: You have a policy against using free ChatGPT for code. But your devs are using it anyway because nobody checks. One day, proprietary code leaks. A 5.36 check would have caught this behaviour early (e.g., by reviewing web logs).
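The web-log review mentioned above can be sketched in a few lines. This is a minimal illustration only: the log format, the blocklist, and the `sample_log` data are assumptions for the sketch, not part of any real proxy product.

```python
# Minimal sketch: flag proxy log lines that hit disallowed AI endpoints.
# The log format and blocklist below are illustrative assumptions.
DISALLOWED_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def find_policy_violations(proxy_lines):
    """Return (user, domain) pairs where a blocked domain was accessed."""
    violations = []
    for line in proxy_lines:
        # Assumed format: "<timestamp> <user> <domain> <status>"
        _, user, domain, _ = line.split()
        if domain in DISALLOWED_DOMAINS:
            violations.append((user, domain))
    return violations

sample_log = [
    "2024-05-01T09:12:03 alice chat.openai.com 200",
    "2024-05-01T09:13:44 bob internal.wiki 200",
]
print(find_policy_violations(sample_log))  # [('alice', 'chat.openai.com')]
```

In practice the same idea runs against your proxy or DNS logs on a schedule, with findings feeding the spot-check log described later in this guide.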
DORA, NIS2 and AI Regulation: Check Your Work
Regulators don’t care about your policy document. They care about reality.
- DORA (Article 5): Requires financial entities to implement and review their internal governance framework. If you have a policy but don’t follow it, you are in breach of governance requirements.
- NIS2 Directive: Mandates that management bodies approve and oversee cybersecurity measures. Oversight means checking if the measures are actually working.
- EU AI Act: Providers must have a Quality Management System (QMS). A key part of QMS is verifying that processes are followed (e.g., data quality checks). If you claim to check for bias but never actually do it, you are non-compliant.
ISO 27001 Toolkit vs SaaS Platforms: The Compliance Trap
SaaS platforms check configurations, but they can’t check human behaviour. Here is why the ISO 27001 Toolkit is superior.
| Feature | ISO 27001 Toolkit (Hightable.io) | Online SaaS Platform |
|---|---|---|
| The Check | Human Verification. Templates for “Clean Desk Audits,” “HR File Reviews,” and “Physical Security Walkthroughs.” | API Checks Only. Platforms check if MFA is on. They can’t check if an employee wrote their password on a post-it note. |
| Ownership | Manager Accountability. Managers sign off on their team’s compliance. It builds culture. | Automated Pass/Fail. The platform gives a green tick, so managers stop caring. Security becomes “IT’s problem,” not everyone’s problem. |
| Simplicity | Spot Check Logs. Simple spreadsheets to record “Date checked: [Date]. Result: Pass.” | Dashboard Fatigue. Thousands of “failed checks” (e.g., one person missed a training deadline) create noise that hides real risks. |
| Cost | One-off fee. Pay once. Audit yourself forever. | Subscription. You pay for a “Compliance Monitor” that misses 50% of the real-world risks (the human ones). |
The Unique Compliance Challenges for AI Companies
The rapid pace of AI development creates dangerous gaps between policy and practice.
The Risk of Dormant Policies in a Fast-Paced Environment
New models emerge in weeks. If your policy says “All models must be approved by the AI Ethics Board,” but the board meets quarterly, devs will bypass it. A 5.36 review catches this misalignment: “We found 3 unapproved models in production. Why?”
The Risk of Scattered Evidence Across Complex Workflows
Gathering evidence from fragmented AI workflows (data ingestion, training, inference) is hard. When an audit looms, you scramble. Regular compliance checks force teams to file evidence (e.g., training logs) as they go, preventing audit panic.
The Risk of Compliance Fatigue Hindering Innovation
For R&D teams, compliance feels like a distraction. If checks are too heavy, they get ignored. A 5.36 review identifies where the process is broken so you can fix it (e.g., automate the check instead of asking for a manual form).
Your Practical Steps to Compliance
Achieving compliance is about embedding repeatable routines.
Establish a Formal Review Process
Document how you check. “Quarterly, the Security Officer will sample 10 laptops for encryption compliance.”
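The quarterly sampling routine above can be sketched as follows. The inventory list, sample size, and `is_encrypted` flag are illustrative assumptions; in a real review, the encryption status would come from your MDM or endpoint management export.

```python
import random
from datetime import date

# Illustrative asset inventory; in practice this comes from an MDM export.
inventory = [{"host": f"laptop-{i:02d}", "is_encrypted": i != 7} for i in range(25)]

def quarterly_spot_check(assets, sample_size=10, seed=0):
    """Sample devices and return one spot-check log entry per device."""
    rng = random.Random(seed)  # fixed seed so the audit sample is reproducible
    sample = rng.sample(assets, sample_size)
    return [
        {"date": date.today().isoformat(),
         "host": a["host"],
         "result": "Pass" if a["is_encrypted"] else "Fail"}
        for a in sample
    ]

log = quarterly_spot_check(inventory)
failures = [row for row in log if row["result"] == "Fail"]
print(f"Checked {len(log)} laptops, {len(failures)} non-compliant.")
```

The fixed seed is a deliberate choice: an auditor can re-run the same sample and verify your log matches.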
Assign Clear Ownership
Managers must own compliance. The Engineering Lead is responsible for ensuring devs don’t use unapproved libraries. The Sales VP is responsible for ensuring sales reps lock their screens.
Manage Non-Compliance Effectively
When you find a violation, fix it. Don’t just ignore it. Log a “Non-Conformity” (NC) and assign a corrective action. This proves the system works.
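The non-conformity workflow above can be captured in a minimal record structure. The fields and statuses here are assumptions modelled on common NCR templates, not a mandated format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class NonConformity:
    """Minimal non-conformity record; fields mirror a typical NCR template."""
    control: str
    finding: str
    corrective_action: str
    owner: str
    due: date
    status: str = "Open"

    def close(self):
        self.status = "Closed"

nc = NonConformity(
    control="A 5.36",
    finding="Unencrypted laptop found during Q2 spot check",
    corrective_action="Enable full-disk encryption; re-verify within 7 days",
    owner="IT Manager",
    due=date(2024, 7, 15),
)
nc.close()
print(nc.status)  # Closed
```

A spreadsheet works just as well; the point is that every finding carries an owner, a deadline, and a closure record.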
The Evidence Locker: What the Auditor Needs to See
When the audit comes, prepare these artifacts:
- Compliance Review Schedule (PDF): A calendar of planned checks (e.g., Q1: HR, Q2: IT, Q3: Physical).
- Spot Check Logs (Excel): Records of the actual checks. “Checked 5 desks. Found 1 unlocked laptop. User reminded.”
- Access Review Reports (PDF): Evidence that you checked user access rights (A 5.18) against the policy.
- Non-Conformity Reports (NCRs): Evidence that you found issues and fixed them. A perfect record with zero issues looks suspicious.
Common Pitfalls & Auditor Traps
Here are the top 3 ways AI companies fail this control:
- The “Paper Tiger” Policy: You have a perfect policy document that says “Passwords must be changed every 90 days,” but the system config is set to “Never expires.” The auditor checks the config and fails you.
- The “Exception” Culture: You check compliance, but everyone has an “exception.” The CEO is exempt from MFA. The devs are exempt from VPN. If everyone is exempt, there is no policy.
- The “Zero Findings” Audit: You claim to do regular reviews but have never found a single issue. This suggests you aren’t looking hard enough.
Handling Exceptions: The “Break Glass” Protocol
Sometimes you have to break policy (e.g., to fix a production outage).
The Policy Waiver Workflow:
- Trigger: Production emergency requires disabling WAF rules.
- Approval: CISO approves temporary waiver (24 hours).
- Documentation: Waiver logged in “Exception Register.”
- Review: Compliance check verifies the waiver was closed and policy re-applied.
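The review step above, verifying that every waiver was actually closed, can be sketched as a check over the exception register. The register layout and field names are assumptions for this sketch.

```python
from datetime import datetime, timedelta

# Illustrative exception register; field names are assumptions.
waivers = [
    {"id": "EX-01", "approved_at": datetime(2024, 6, 1, 9, 0),
     "ttl_hours": 24, "closed": False},
    {"id": "EX-02", "approved_at": datetime(2024, 6, 1, 9, 0),
     "ttl_hours": 24, "closed": True},
]

def overdue_waivers(register, now):
    """Return waivers whose approval window has elapsed but are still open."""
    return [
        w for w in register
        if not w["closed"]
        and now > w["approved_at"] + timedelta(hours=w["ttl_hours"])
    ]

now = datetime(2024, 6, 3, 9, 0)
print([w["id"] for w in overdue_waivers(waivers, now)])  # ['EX-01']
```

Any waiver flagged here becomes a non-conformity: an emergency exception that quietly became permanent is exactly what A 5.36 exists to catch.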
The Process Layer: “The Standard Operating Procedure (SOP)”
How to operationalise A 5.36 using your existing stack (Linear, Slack).
- Step 1: Schedule (Automated). Recurring Linear ticket: “Monthly Security Spot Check.”
- Step 2: Check (Manual). Security Officer walks the floor (or checks Slack logs) for violations.
- Step 3: Record (Manual). Log findings in the ticket comments.
- Step 4: Remediate (Automated). If violation found, create a sub-ticket for the user: “Please fix [Issue] by [Date].”
- Step 5: Report (Manual). Monthly summary to the Board: “98% Compliance Rate.”
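Step 5’s board summary can be computed directly from the month’s check results. A minimal sketch, assuming each check is recorded as a simple pass/fail:

```python
def compliance_summary(results):
    """results: list of booleans, one per check (True = pass)."""
    passed = sum(results)
    rate = 100 * passed / len(results)
    return f"{rate:.0f}% Compliance Rate ({passed}/{len(results)} checks passed)"

monthly_results = [True] * 49 + [False]  # 49 of 50 checks passed this month
print(compliance_summary(monthly_results))  # 98% Compliance Rate (49/50 checks passed)
```

Reporting the raw fraction alongside the percentage keeps the board number honest: “98%” from 50 checks means something different than “98%” from 5.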
By turning compliance into a proven, daily habit, you can innovate with confidence. The High Table ISO 27001 Toolkit provides the practical foundation for achieving this, turning a complex requirement into a clear business process.
ISO 27001 Annex A 5.36 for AI Companies FAQ
What is ISO 27001 Annex A 5.36 for AI companies?
ISO 27001 Annex A 5.36 requires AI companies to regularly review their technical systems and operations against documented security policies and standards. This control ensures that ML pipelines, model architectures, and data handling processes remain compliant with internal governance and external regulatory frameworks, reducing the risk of undetected policy deviations.
How do AI firms verify technical compliance with security policies?
AI firms verify compliance by combining automated configuration audits with periodic manual spot checks. To satisfy Annex A 5.36, organisations should scan their GPU clusters and Kubernetes environments on a regular cadence and remediate findings promptly, using continuous monitoring to catch security configuration drift before it accumulates.
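The automated configuration audit described above boils down to diffing a live configuration against a policy baseline. A minimal sketch; the baseline keys and the `live_node` values are illustrative assumptions, not a real scanner’s schema:

```python
# Compare a live configuration against the policy baseline; any mismatch is drift.
baseline = {"disk_encryption": True, "mfa_required": True, "ssh_root_login": False}

def config_drift(live, policy=baseline):
    """Return {setting: (expected, actual)} for every deviation from policy."""
    return {
        key: (expected, live.get(key))
        for key, expected in policy.items()
        if live.get(key) != expected
    }

live_node = {"disk_encryption": True, "mfa_required": False, "ssh_root_login": False}
print(config_drift(live_node))  # {'mfa_required': (True, False)}
```

Real tooling (MDM, CSPM, or infrastructure-as-code drift detection) does the same comparison at scale; the output of each run is exactly the kind of compliance monitoring report an auditor asks for.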
What are the penalties for non-compliance with security standards in AI?
Non-compliance with Annex A 5.36 is typically raised as a non-conformity during audit; a major non-conformity left unresolved can lead to suspension or withdrawal of ISO 27001 certification. Furthermore, failing to adhere to security standards mandated by the EU AI Act can result in severe financial penalties: up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices, and up to €15 million or 3% for breaches of high-risk system obligations.
How does Annex A 5.36 integrate with AI-specific standards like ISO 42001?
Annex A 5.36 serves as the foundational oversight mechanism for the ISO/IEC 42001 Artificial Intelligence Management System (AIMS). By aligning these standards, AI companies can maintain a single compliance view covering both information security and AI safety requirements, which streamlines audit preparation by letting one review cycle produce evidence for both management systems.
What evidence is required to prove compliance with AI security policies?
To satisfy an ISO 27001 auditor for Annex A 5.36, AI companies must provide the following technical artifacts:
- Compliance Monitoring Reports: Documented logs from automated system configuration audits.
- Policy Deviation Logs: A record of any instances where ML workflows drifted from the AI Security Policy.
- Remediation Records: Proof of technical changes made to bring non-compliant GPU or data storage environments back into alignment.