ISO 27001:2022 Clause 5.3 Organisational Roles, Responsibilities and Authorities for AI Companies


ISO 27001 Clause 5.3 is a mandatory requirement of the standard that demands the formal assignment and communication of organisational roles, responsibilities and authorities for information security. It requires top management to assign specific responsibility for security tasks so that accountability is unambiguous. The business benefit: closed governance gaps and faster enterprise sales cycles.

Look, your AI company is moving at terminal velocity. You are shipping code, training models, and trying to win the race to AGI. But in the cold light of an audit, your “groundbreaking” LLM is just another piece of software that needs a gatekeeper. Regulators and enterprise clients do not care about your weights and biases if they cannot find out who is actually responsible for the security of the data training them.

This is where ISO 27001 Clause 5.3 stops being a paperwork exercise and starts being your shield. It is about accountability. It ensures that when things go sideways, there is a name on the ticket. For an AI-driven business, ignoring Clause 5.3 is the fastest way to fail a Stage 2 audit and lose that six-figure enterprise contract. You cannot outsource accountability to a SaaS platform dashboard: you have to own it.


The “No-BS” Translation: Decoding the Requirement

The official ISO 27001 text says: “Top management shall ensure that the responsibilities and authorities for roles relevant to information security are assigned and communicated within the organisation.”

The Auditor’s View (The Jargon) → The AI Company View (The Reality):

  • Information Processing Facilities → Your AWS/GCP clusters, MacBooks, and the GitHub repos holding your model weights.
  • Top Management Authority → The CEO and CTO actually giving a damn and putting their names on the security policy.
  • Communicate within the organisation → Putting a “Security Roles” table in your Notion or Confluence and making sure everyone on Slack knows who to ping.
  • Assigned and Documented → Not just “we all do security,” but a CSV or Word doc that says “Dave owns the AWS IAM keys.”

The Business Case: Why This Actually Matters

Compliance is often seen as a handbrake, but for AI companies, it is a sales accelerator. If you skip this control, here is what happens to your bank account:

  • The Sales Angle: When a Tier-1 Bank looks at your security questionnaire and asks “Who is responsible for model integrity?” and your answer is “The DevOps team generally,” you lose the deal. They want a specific CISO or Security Lead who has the authority to pull the plug if a data leak occurs.
  • The Risk Angle: Without clear roles, you get “Shadow IT.” A researcher spins up a massive GPU instance on a personal credit card to test a new model, leaks training data, and because no one was “assigned” to monitor cloud spend or security, you find out three months later when the data is on a dark web forum.
  • The Nightmare Scenario: A developer leaves on bad terms. Because nobody was formally responsible for the “Offboarding Process,” their access to the production weights remains active. They clone your proprietary model and start a competitor.

Why the ISO 27001 Toolkit Beats SaaS Platforms

Many AI startups fall for the “compliance automation” trap. They pay £10k a year for a shiny dashboard that creates a false sense of security. Here is why a proper toolkit is superior:

ISO 27001 Toolkit (HighTable) vs. Expensive SaaS GRC Platforms, feature by feature:

  • Ownership: Toolkit — you own the files forever; they are your IP. SaaS — you rent your compliance; stop paying, lose your data.
  • Simplicity: Toolkit — uses Word and Excel, which everyone knows how to use. SaaS — requires hours of training just to navigate the UI.
  • Cost: Toolkit — one-off fee, no “per user” tax. SaaS — heavy monthly subscriptions that never end.
  • Freedom: Toolkit — no vendor lock-in; move your docs anywhere. SaaS — your entire ISMS is trapped in their proprietary ecosystem.
  • Auditor Trust: Toolkit — shows you actually thought about and wrote your processes. SaaS — often looks like a “copy-paste” job, triggering deeper scrutiny.

DORA, NIS2, and the EU AI Act

Clause 5.3 is the “Master Key” for other regulations:

  • DORA (Digital Operational Resilience Act): Mandates that the Management Body is ultimately responsible for ICT risk. You cannot meet DORA requirements if your ISO 27001 roles are not defined.
  • NIS2: Requires “Management bodies” to approve risk management measures and follow training. Clause 5.3 is the evidence that your management is engaged.
  • EU AI Act: Requires clear “Provider” responsibilities, especially for high-risk AI. Mapping these to your ISO roles ensures you aren’t doing the work twice.

The Key Players: Your AI Security Team

In an AI company, your roles look a bit different:

  • The CEO (Top Management): Sets the budget. If the CISO says we need £50k for model monitoring and the CEO says no, that is a documented management decision.
  • The MLOps/Security Lead (Information Security Manager): The person who actually knows how the pipelines work and ensures the ISMS isn’t just a dead PDF.
  • The Data Custodian: A critical AI role. Responsible for the “cleanliness” and security of the training datasets.
  • The Management Review Team: CTO, CEO, and Head of Product. They meet to decide if the current risk to the company intellectual property is acceptable.

The Evidence Locker: What the Auditor Needs

Don’t panic during audit week. Have these ready:

  1. The Roles and Responsibilities Document: A formal PDF from your toolkit listing every ISO role and the name of the person currently holding it.
  2. The RASCI Matrix: An Excel sheet showing who is Responsible and Accountable for every control (e.g., Who patches the GPU drivers? Who reviews the GitHub logs?).
  3. Board Meeting Minutes: Proof that the “Top Management” actually met and talked about security, not just “growth at all costs.”
  4. Job Descriptions: 2 or 3 samples of employee contracts or JDs that mention “security responsibilities” as part of their job.
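
The RASCI matrix in item 2 can be spot-checked automatically before audit week. A minimal Python sketch (the matrix, control names, and people are hypothetical) that flags any control lacking exactly one Accountable owner:

```python
import csv
import io

# Hypothetical RASCI matrix export: one row per control, one column per
# person, cells hold R/A/S/C/I codes. Each control needs exactly one "A".
RASCI_CSV = """control,Dave,Priya,Sam
A.8.1 User endpoint security,A,R,I
A.5.10 Acceptable use,R,,C
A.8.7 Malware protection,R,A,C
"""

def find_unaccountable(csv_text):
    """Return controls that do not have exactly one Accountable (A) owner."""
    reader = csv.DictReader(io.StringIO(csv_text))
    bad = []
    for row in reader:
        owners = [v for k, v in row.items() if k != "control" and v == "A"]
        if len(owners) != 1:
            bad.append(row["control"])
    return bad

print(find_unaccountable(RASCI_CSV))  # → ['A.5.10 Acceptable use']
```

Running this quarterly against the real spreadsheet export catches the classic finding — a control everyone is "responsible" for but nobody is accountable for — before the auditor does.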

Common Pitfalls and Auditor Traps

I have failed companies for these three things more than anything else:

  • The “Zombie” Admin: Your documentation says “Sarah is the Security Manager,” but Sarah left the company six months ago. This is an instant non-conformity.
  • The “Shadow” SaaS: You use a GRC platform that says you are 100% compliant, but the auditor asks your lead dev who is responsible for “Annex A 8.1 User Endpoint Security” and they say “I don’t know, ask the platform.” That is a fail.
  • Zero Authority: You appoint a junior dev as the “Security Lead” but give them zero power to change firewalls or block a deployment. Auditors will see through this “token” appointment in minutes.

Handling Exceptions: The “Break Glass” Protocol

In AI, sometimes you need to bypass a control to fix a P0 production crash.

  • The Emergency Path: Use a “Break Glass” account (like the AWS Root user) that is locked in a digital vault (e.g., 1Password).
  • The Paper Trail: Any use of emergency authority MUST trigger a retrospective Linear or Jira ticket.
  • Time Limits: Emergency access is granted for a specific window (e.g., 4 hours). After that, the “Authority” expires.
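
The expiry rule above can be enforced with very little code. A minimal Python sketch, assuming a hypothetical grant log, a 4-hour window, and made-up user and ticket names, that records a break-glass grant against a ticket and checks whether the authority is still active:

```python
from datetime import datetime, timedelta, timezone

# Emergency authority is time-boxed: the grant carries an expiry and a
# ticket reference, and every use checks the grant is still live.
GRANT_WINDOW = timedelta(hours=4)

def open_grant(user, ticket, now=None):
    """Record a break-glass grant tied to a retrospective ticket."""
    now = now or datetime.now(timezone.utc)
    return {"user": user, "ticket": ticket, "expires": now + GRANT_WINDOW}

def is_active(grant, now=None):
    """Emergency authority lapses automatically once the window closes."""
    now = now or datetime.now(timezone.utc)
    return now < grant["expires"]

t0 = datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc)
grant = open_grant("dave", "LIN-142", now=t0)
print(is_active(grant, now=t0 + timedelta(hours=1)))  # True: within window
print(is_active(grant, now=t0 + timedelta(hours=5)))  # False: authority expired
```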

The Process Layer: Standard Operating Procedure

How to actually run Clause 5.3 daily:

  • Onboarding: New hire joins -> Assigned a role in the RASCI matrix -> Signs the “Acceptable Use Policy.”
  • Maintenance: Quarterly “Role Review” meeting. Check if the people listed in the doc still work here and still have the same job.
  • Offboarding: Revoke access based on the role. If the “IAM Admin” leaves, rotate the keys immediately.
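
The quarterly role review in the maintenance step can be scripted. A minimal Python sketch (role assignments and names are made up) that cross-checks the roles document against the current staff list and flags "zombie" holders who have left:

```python
# Hypothetical roles document vs. current HR staff list. Any role still
# assigned to a leaver is the "zombie admin" non-conformity auditors love.
ROLE_HOLDERS = {
    "Security Manager": "sarah",
    "IAM Admin": "dave",
    "Data Custodian": "priya",
}
ACTIVE_STAFF = {"dave", "priya", "sam"}

def zombie_roles(role_holders, active_staff):
    """Roles still assigned to people who no longer work here."""
    return sorted(role for role, who in role_holders.items()
                  if who not in active_staff)

print(zombie_roles(ROLE_HOLDERS, ACTIVE_STAFF))  # → ['Security Manager']
```

In practice the same check works against a real HR system export, and the flagged roles become agenda items for the next role review meeting.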

ISO 27001 Clause 5.3 FAQ

What is ISO 27001 Annex A 5.3 for AI companies?

ISO 27001 Annex A control 5.3 (Segregation of Duties — a distinct control from management Clause 5.3 covered above) requires AI companies to separate conflicting duties to prevent fraud or errors. In AI development, this means ensuring that the individual who develops a model is not the same person who authorises its deployment into production, substantially reducing insider risk.

How does Segregation of Duties apply to AI data pipelines?

AI data pipelines must separate data ingestion, labelling, and training roles. By ensuring that data scientists do not have administrative access to live production databases, firms drastically reduce the risk of unauthorised training data modifications. This separation is critical for maintaining model integrity and audit trails.

Why is Annex A 5.3 vital for compliance with the EU AI Act?

Annex A 5.3 supports the governance and transparency requirements of the EU AI Act. Article 15 mandates accuracy, robustness and cybersecurity; segregating development from validation keeps testing genuinely objective. The Act's steepest penalties for the most serious infringements reach €35 million or 7% of global annual turnover.

Can small AI startups implement Segregation of Duties?

Small AI startups can implement segregation through independent reviews or automated CI/CD gateways. While headcount may be limited, compliance is achievable by using mandatory peer reviews in GitHub or GitLab. This technical segregation acts as a virtual wall, preventing a single point of failure in the security chain.
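
The CI/CD gateway reduces to one rule: no production merge without an independent approver. A minimal Python sketch of that check (usernames are hypothetical; on GitHub or GitLab the same rule is enforced natively via branch protection and required approvals):

```python
# Technical segregation of duties: the person who wrote the change
# cannot be the only person who approved shipping it.
def merge_allowed(author, approvers):
    """A merge needs at least one approver who is not the author."""
    return any(a != author for a in approvers)

print(merge_allowed("priya", ["priya"]))         # False: self-approval
print(merge_allowed("priya", ["dave"]))          # True: independent reviewer
print(merge_allowed("priya", ["priya", "sam"]))  # True: second party present
```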

What evidence do auditors look for in AI Segregation of Duties?

Auditors require objective evidence of distinct roles and approval logs. You must provide a RASCI matrix and deployment logs showing that production merges were authorised by a second party. Missing this evidence is one of the most common sources of minor non-conformities during ISO 27001 Stage 2 audits.

About the author

Stuart Barker

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
