ISO 27001 Annex A 5.35 Independent review of information security for AI Companies

ISO 27001 Annex A 5.35 is a security control that requires an organisation’s information security approach to be reviewed by an objective party. For an AI company, a well-run independent review programme provides oversight across complex AI workflows and gives enterprise customers and regulators objective assurance.

ISO 27001 Annex A 5.35 Independent review of information security requires your organisation’s entire approach to security to be reviewed by an independent party. The purpose is simple: to ensure that your security measures, covering people, processes, and technology, remain suitable, adequate, and effective.

For any business, this is a sensible practice. But for you, as an AI company, this control is far more than a compliance checkbox. In a field defined by rapid innovation and complex data ecosystems, you cannot mark your own homework. The goal is to implement oversight that provides assurance to your customers and partners without encumbering the agile, experimental workflows that drive AI innovation.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak. Annex A 5.35 is about removing bias. It asks: “Is your security actually good, or do you just think it’s good because you built it?”

The Auditor’s View (ISO 27001) | The AI Company View (Reality)
“The organisation’s approach to managing information security… shall be reviewed independently at planned intervals.” | Don’t mark your own homework. The DevOps lead who configured the AWS firewall cannot be the one to audit it. They will miss their own mistakes. You need a fresh pair of eyes (internal or external).
“Review shall verify that the activities… are suitable, adequate and effective.” | Does it actually work? Don’t just check if the policy exists (paper exercise). Check if the policy stops a hacker. If you have a password policy but your CEO uses “Password123”, the control is not effective.

The Business Case: Why This Actually Matters for AI Companies

Why should a founder care about “Independent Review”? Because groupthink kills security.

The Sales Angle

Enterprise clients will ask: “When was your last external penetration test or security audit?” If your answer is “We check it ourselves,” you look amateur. If your answer is “We undergo quarterly independent reviews and annual third-party audits, and here is the summary report,” you close the deal. A 5.35 builds credibility.

The Risk Angle

The “Configuration Drift” Risk: You set up your cloud environment perfectly on Day 1. Six months later, 5 developers have made “temporary fixes” that are now permanent holes. You won’t see them because you are too close to the code. An independent review spots the drift before an attacker does.
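A minimal sketch of what such a drift check can look like, assuming boto3 credentials are configured; the baseline file name is illustrative, and a real review would also cover IAM, storage, and network policy. The idea is simply to diff today’s reality against the approved “Day 1” state:

    # Compare live AWS security group rules against a saved baseline.
    # "sg_baseline.json" is a hypothetical snapshot captured on Day 1.
    import json
    import boto3

    def snapshot_security_groups(ec2):
        """Return {group_id: sorted ingress rules} for every security group."""
        groups = ec2.describe_security_groups()["SecurityGroups"]
        return {
            g["GroupId"]: sorted(json.dumps(p, sort_keys=True) for p in g["IpPermissions"])
            for g in groups
        }

    ec2 = boto3.client("ec2")
    current = snapshot_security_groups(ec2)

    with open("sg_baseline.json") as f:
        baseline = json.load(f)

    for group_id, rules in current.items():
        if rules != baseline.get(group_id):
            print(f"DRIFT: {group_id} no longer matches the approved baseline")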

DORA, NIS2 and AI Regulation: The Audit Mandate

Regulators don’t trust you to audit yourself.

  • DORA (Article 6): Financial entities must have an “internal governance and control framework” that is subject to independent review. If you sell AI to banks, they will enforce this on you.
  • NIS2 Directive: Mandates regular auditing of security measures. You must prove your controls are effective, not just documented.
  • EU AI Act: High-risk AI systems require “Conformity Assessments.” This is essentially a massive independent review of your technical documentation, risk management, and data governance. A 5.35 sets the stage for this.

ISO 27001 Toolkit vs SaaS Platforms: The Audit Trap

SaaS platforms automate evidence collection, but they cannot perform an independent review. A script can tell you if a port is open; it cannot tell you if your risk appetite is appropriate. Here is why the ISO 27001 Toolkit is superior.

Feature | ISO 27001 Toolkit (Hightable.io) | Online SaaS Platform
The Review | Audit Protocols. Step-by-step guides on how to audit: “Interview the CTO and ask X.” | Automated Checks. Checks configurations (AWS Config) but misses the human element. It can’t audit your HR onboarding process effectively.
Ownership | Your Audit Programme. You define the schedule and scope. You keep the reports. | Black Box. The platform gives you a “score,” but you don’t own the underlying audit methodology. If you leave, you lose your audit history.
Simplicity | Checklists. Simple documents for Internal Auditors to follow. | Alert Fatigue. Platforms flag thousands of “issues” (like a missing tag) that aren’t real risks, burying the actual problems.
Cost | One-off fee. Pay once. Audit forever. | Continuous Cost. You pay monthly for the “monitoring” feature, even if you only review it quarterly.

The AI Challenge: Why Independent Reviews are Different for You

Standard security audits often miss the nuances of AI. A generalist auditor might check your laptop encryption but miss the fact that you are pasting customer data into ChatGPT.

Protecting Your Crown Jewels: Securing Training Data

An independent review must assess how training data is protected. The challenge is providing access for a reviewer without exposing the IP. You need a reviewer who understands “Data Clean Rooms” and doesn’t ask you to email them a copy of the dataset.
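One way to square that circle, sketched below with pandas: give the reviewer a small, irreversibly masked sample plus summary statistics rather than the dataset itself. The file path and column names are hypothetical:

    # Produce auditor-safe evidence about a training set without shipping it.
    import hashlib
    import pandas as pd

    df = pd.read_parquet("training_data.parquet")  # illustrative path

    sample = df.sample(n=100, random_state=42).copy()
    for col in ["email", "user_id"]:  # columns known to hold identifiers
        sample[col] = sample[col].astype(str).map(
            lambda v: hashlib.sha256(v.encode()).hexdigest()[:12]  # one-way token
        )

    # The reviewer sees structure and distributions, never raw records.
    sample.to_csv("audit_sample_masked.csv", index=False)
    print(df.describe(include="all"))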

Maintaining Model Integrity: Algorithmic Processes Under Scrutiny

Your AI models are not static. An improperly scoped review (e.g., aggressive penetration testing against a production model) could cause downtime or degrade model behaviour. The review must be planned to validate security without breaking the inference engine.
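One read-only test a reviewer can run without touching the live endpoint, assuming each release records the hash of its model artifact; the registry file and paths are illustrative:

    # Verify the deployed model artifact matches the hash approved at release.
    import hashlib
    import json

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    with open("model_registry.json") as f:  # {"model.safetensors": "<approved sha256>"}
        approved = json.load(f)

    deployed = sha256_of("/srv/models/model.safetensors")
    print("INTACT" if deployed == approved["model.safetensors"] else "TAMPERED OR CHANGED")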

The AI Supply Chain: A New Frontier for Vulnerabilities

Your review must extend to your supply chain (Hugging Face, OpenAI APIs). Are you reviewing the security of the libraries you import? A standard audit might miss the risk of a compromised PyTorch dependency.
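A simple spot check a reviewer can run, assuming pip-audit (a PyPA tool) is installed; the requirements path is illustrative, and a full supply-chain review would also verify artifact signatures and model provenance:

    # Scan installed packages for known vulnerabilities, then flag any
    # unpinned dependencies that make the build non-reproducible.
    import subprocess

    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout)  # non-empty findings mean known CVEs are present

    with open("requirements.txt") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" not in line:
                print(f"UNPINNED DEPENDENCY: {line}")  # e.g. "torch" with no version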

Your Blueprint for Compliance: Practical Steps for AI Businesses

Successfully implementing Annex A 5.35 requires a structured plan.

Establishing Your Review Programme

Define two types of review (a minimal scheduling sketch follows the list):

  • Planned Reviews: Scheduled (e.g., Annual Internal Audit) covering the whole ISMS.
  • Trigger-Based Reviews: Ad-hoc reviews prompted by major changes (e.g., deploying a new Foundation Model).
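A minimal sketch of that programme as data, so “is a review due?” becomes a mechanical question; the intervals, dates, and trigger names are all illustrative:

    # Planned reviews fire on a schedule; trigger-based reviews fire on events.
    from datetime import date, timedelta

    PLANNED = {"Annual Internal Audit": timedelta(days=365),
               "Quarterly Access Review": timedelta(days=91)}
    TRIGGERS = {"new_foundation_model", "new_training_dataset", "major_incident"}

    last_run = {"Annual Internal Audit": date(2024, 11, 1),
                "Quarterly Access Review": date(2025, 1, 15)}

    def reviews_due(today, events):
        due = [name for name, interval in PLANNED.items()
               if today - last_run[name] >= interval]
        due += [f"Trigger-based review: {event}" for event in events if event in TRIGGERS]
        return due

    print(reviews_due(date.today(), {"new_foundation_model"}))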

Selecting the Right Reviewer

Objectivity is key. You cannot audit your own work.

Suitable Reviewers | Unsuitable Reviewers
External Consultant / Pen Tester | The CISO (reviewing their own strategy)
Head of Operations (reviewing Engineering) | The Lead Developer (reviewing their own code)
Internal Audit Team (if you have one) | Anyone who reports to the person being audited

The Evidence Locker: What the Auditor Needs to See

When the external auditor comes, they want to see that you have been checking yourself.

  • Internal Audit Schedule (PDF): A calendar showing when reviews happen.
  • Audit Reports (PDF): The actual findings. “We checked X and found Y.”
  • Non-Conformity Reports (NCRs): Evidence that you found problems. (Auditors like seeing these; it proves you are looking).
  • Management Review Minutes: Proof that the results were shown to the CEO/Board.

Common Pitfalls & Auditor Traps

Here are the top 3 ways AI companies fail this control:

  • The “Mate’s Review”: The CTO asks the VP of Engineering to audit them. They go for a beer and sign it off. No real checking happens. Lack of rigour.
  • The “Checkbox” Audit: The reviewer uses a generic template and ticks “Yes” to everything without looking at the evidence.
  • The “Ignored Report”: You paid for a Penetration Test, it found 5 critical issues, and you put the PDF in a drawer and did nothing. This is worse than not doing the test.

Handling Exceptions: The “Break Glass” Protocol

Sometimes an auditor cannot see everything (e.g., highly sensitive PII).

The Audit Scope Limitation Workflow:

  • Constraint: Auditor requests access to raw patient data training set.
  • Restriction: Access denied due to HIPAA/GDPR privacy rules.
  • Alternative: Provide “Sampled” or “Synthetic” data, or walk them through the process via screen share without granting direct access.
  • Document: Note the scope limitation in the Audit Report.

The Process Layer: The Standard Operating Procedure (SOP)

How to operationalise A 5.35 using your existing stack (Linear, Google Drive).

  • Step 1: Schedule (Automated). Recurring task in Linear: “Quarterly Access Review” or “Annual Internal Audit.”
  • Step 2: Assign (Manual). Assign the ticket to someone outside the team being reviewed.
  • Step 3: Execute (Manual). Reviewer uses the High Table Audit Checklist to test controls.
  • Step 4: Report (Manual). Reviewer creates a “Findings Report” in Google Drive.
  • Step 5: Remediate (Automated). Findings are turned into new Linear tickets (“Fix bug X”) and tracked to closure (see the sketch after this list).
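Step 5 can be scripted against Linear’s GraphQL API. A minimal sketch, assuming a personal API key and team ID sit in environment variables; the findings list is purely illustrative, and authentication details may differ for OAuth apps:

    # Turn audit findings into tracked Linear issues via the GraphQL API.
    import os
    import requests

    FINDINGS = ["Rotate stale IAM keys flagged in Q3 audit",
                "Enable MFA for two contractor accounts"]

    MUTATION = """
    mutation($input: IssueCreateInput!) {
      issueCreate(input: $input) { success issue { identifier } }
    }
    """

    for title in FINDINGS:
        resp = requests.post(
            "https://api.linear.app/graphql",
            json={"query": MUTATION,
                  "variables": {"input": {
                      "teamId": os.environ["LINEAR_TEAM_ID"],
                      "title": title,
                      "description": "Raised by internal audit (Annex A 5.35)"}}},
            headers={"Authorization": os.environ["LINEAR_API_KEY"]},
            timeout=30,
        )
        resp.raise_for_status()
        print(resp.json()["data"]["issueCreate"]["issue"]["identifier"])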

For an innovative AI company, implementing a robust process for independent security reviews is a strategic imperative. By leveraging a practical resource like the High Table ISO 27001 Toolkit, you can transform this complex requirement into a manageable business process.

ISO 27001 Annex A 5.35 for AI Companies FAQ

What is ISO 27001 Annex A 5.35 for AI companies?

ISO 27001 Annex A 5.35 requires that an organisation’s approach to managing information security and its implementation be reviewed independently at planned intervals. For AI companies, this means critical ML workflows and automated decision systems must undergo objective audits to ensure security policies remain effective against evolving adversarial threats.

Who should conduct the independent review for AI firms?

Reviews must be conducted by individuals independent of the area being audited, such as an internal audit team or an external specialist. In AI firms, the reviewer must possess specific technical competency in machine learning security to effectively assess high-complexity environments, including vector databases, GPU clusters, and model training pipelines.

What are the technical requirements for reviewing AI security under Annex A 5.35?

The independent review must assess the management of technical risks specific to AI infrastructure. Key compliance focus areas for the review process include:

  • Validation of access controls and encryption for all proprietary training datasets (sketched in code after this list).
  • Auditing the security and data-handling protocols of third-party LLM API integrations.
  • Reviewing the robustness of inference endpoints against prompt injection and model extraction attacks.
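A reviewer-side sketch of the first item, assuming the training data lives in S3 and boto3 credentials are available; the bucket names are hypothetical:

    # Confirm training-data buckets enforce default encryption and block
    # public access; any failure becomes an audit finding.
    import boto3
    from botocore.exceptions import ClientError

    TRAINING_BUCKETS = ["acme-training-data", "acme-feature-store"]

    s3 = boto3.client("s3")
    for bucket in TRAINING_BUCKETS:
        try:
            rules = s3.get_bucket_encryption(Bucket=bucket)[
                "ServerSideEncryptionConfiguration"]["Rules"]
            algo = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
            print(f"{bucket}: default encryption on ({algo})")
        except ClientError:
            print(f"{bucket}: NO default encryption -> audit finding")
        try:
            pab = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
            if not all(pab.values()):
                print(f"{bucket}: public access not fully blocked -> audit finding")
        except ClientError:
            print(f"{bucket}: no public access block configured -> audit finding")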

How frequent are independent reviews for AI organisations?

Independent reviews should occur at least annually, or whenever significant changes to the AI architecture occur. Because many AI firms iterate on their models weekly, a quarterly “deep-dive” review or a continuous auditing programme is recommended to ensure the Information Security Management System (ISMS) keeps pace with rapid deployments.

Why is Annex A 5.35 critical for AI model trust?

Annex A 5.35 provides objective evidence that security controls are functioning as intended, which is vital for building stakeholder trust. Independent checks reduce the likelihood that threats such as model poisoning go unnoticed, helping ensure the AI system remains reliable, transparent, and compliant with global standards like the EU AI Act.

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigour with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
