ISO 27001:2022 Annex A 5.4 Management responsibilities for AI Companies

ISO 27001 Annex A 5.4 Management Responsibilities is a security control that requires leadership to mandate and enforce security practices at every level of the organisation. The primary implementation requirement is documented governance oversight; the business benefit is a protected valuation and enterprise-grade trust during high-stakes AI sales cycles.

If you are running an AI company, you live by the motto “move fast and ship models.” But when you decide to get ISO 27001 certified, you hit a speed bump: ISO 27001 Annex A 5.4 Management Responsibilities. This control doesn’t care about your latest algorithm; it cares about whether your leadership is actually driving security or just paying lip service to it.

For AI companies, where data is the product and developers often have god-mode access to production, this control is critical. It bridges the gap between “we have a policy” and “our engineers actually follow it.” Here is how to implement Annex A 5.4 without killing your velocity.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak and look at what this actually means for a 25-year-old DevOps engineer or a Lead Data Scientist.

The Auditor’s View (ISO 27001) vs The AI Company View (Reality):

  • Auditor: “Management shall require all personnel to apply information security in accordance with the established information security policy.” Reality: The CTO stops the Junior Dev from pushing AWS keys to a public GitHub repo. If they do it anyway, there is a consequence, not just a shrug.
  • Auditor: “Management shall ensure that personnel are aware of their information security responsibilities.” Reality: You cannot just email a PDF and hope they read it. You need to verify that your ML researchers know why they cannot upload customer data to ChatGPT.
  • Auditor: “Management shall ensure that information security policies are enforced.” Reality: If the Head of Engineering bypasses MFA because “it’s annoying,” you have failed. Rules apply to everyone, especially the founders.

The Business Case: Why This Actually Matters for AI Companies

Management Responsibility isn’t about micromanagement; it is about protecting your valuation. Neglect this control and here is the nightmare scenario.

The Sales Angle

Enterprise buyers know that AI startups are chaotic. When they ask, “How does management oversee security compliance?” in a questionnaire, they are looking for maturity. If your answer is “We trust our devs,” you will be flagged as high risk. If your answer is “Our Head of Engineering reviews security KPIs monthly and enforces a zero-tolerance policy for data leaks,” you close the deal.

The Risk Angle

Insider Threat & Negligence: In an AI company, a disgruntled employee with access to the model weights can destroy your competitive advantage in seconds. Annex A 5.4 ensures that management has the levers to enforce “Least Privilege” and monitor behaviour, preventing data exfiltration before it happens.

DORA, NIS2 and AI Regulation: The Accountability Hammer

If you think ISO 27001 is strict, the new EU regulations are a wake-up call. Annex A 5.4 is your practice run for personal liability.

  • DORA (Digital Operational Resilience Act): Article 5 explicitly states that the “Management Body” is ultimately responsible for ICT risk. You cannot delegate this. If you don’t have the evidence required by A 5.4, your board is non-compliant with EU law.
  • NIS2 Directive: Article 20 holds management bodies personally liable for gross negligence in cybersecurity. A 5.4 provides the governance structure to prove you were not negligent.
  • EU AI Act: Requires “Human Oversight” of AI systems. A 5.4 establishes the chain of command required to demonstrate that a human—not a black-box algorithm—is ultimately in charge of data security.

ISO 27001 Toolkit vs SaaS Platforms: The Management Trap

SaaS platforms love to “automate” this control by sending generic reminder emails. But an email is not management. Here is why the ISO 27001 Toolkit is superior for demonstrating actual leadership.

  • Ownership. Toolkit (Hightable.io): You define the culture. You edit the “Roles & Responsibilities” document to fit your unique AI team structure. SaaS platform: You rent a checkbox. The platform sends a “Please read” email; staff click “OK” without reading. That isn’t management; it’s spam.
  • Simplicity. Toolkit: Documents you understand, such as a clear “Job Description” template in Word that HR can actually use. SaaS platform: Hidden logic. The platform marks the control as “Passing” because 80% of staff clicked a button, masking the fact that your Lead Dev ignores security.
  • Cost. Toolkit: One-off fee. Pay once, own your governance forever. SaaS platform: Expensive subscriptions. You pay monthly for a bot to nag your staff, which you could do yourself for free.
  • Freedom. Toolkit: No vendor lock-in. Your management policies sit in your Google Drive/SharePoint, accessible to everyone, forever. SaaS platform: Data hostage. If you leave the platform, you lose the audit logs proving management oversight.

The Unique Challenge for AI Companies

In traditional businesses, management responsibility might mean ensuring people lock their filing cabinets. In an AI company, it means ensuring your Data Scientists aren’t downloading sensitive customer PII to their local machines to train a model “just this once.”

Annex A 5.4 requires management to require all personnel to apply information security in accordance with established policies. This is tricky in AI because:

  • The boundaries are blurry: Is a model weight file “software” or “data”? Who owns it?
  • The culture is open: Research teams are used to sharing everything, which conflicts with the principle of least privilege.
  • The speed is high: Security checks can feel like they are slowing down training runs.

Step 1: Define “Management” in a Flat Hierarchy

Many AI startups pride themselves on flat structures. But for ISO 27001, you need clear lines of accountability. You need to define exactly who is responsible for enforcing security policies.

It’s not enough to have a CISO. You need the Head of Engineering and the Head of Data Science to own the security of their teams. If a developer leaves an S3 bucket open, their direct manager must be the one to address it, not just the security team.

Actionable Tip: Update your job descriptions. Ensure that “Adhering to Information Security Policies” is a KPI for your Lead Data Scientists and Engineering Managers. If it’s not in their review, they won’t prioritise it.

Step 2: Make Policies “Code,” Not Just Paper

Annex A 5.4 asks management to ensure policies are applied. In an AI company, the best way to do this is to automate it. Don’t just tell people to secure their code; force it.

  • Policy: “No secrets in code.” Management action: mandate pre-commit hooks that scan for API keys.
  • Policy: “Access control.” Management action: implement automated de-provisioning so that when a contractor leaves, their access to the training cluster is cut instantly.

Management’s role here is to approve the budget and time for these tools. If leadership denies the budget for a scanning tool, they are failing their A 5.4 responsibility.
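The “no secrets in code” policy can be enforced mechanically. The sketch below is a minimal pre-commit hook in Python that blocks commits containing AWS-style keys. The patterns and file handling are simplified for illustration; in practice you would mandate a maintained scanner such as gitleaks or detect-secrets rather than roll your own.

```python
"""Minimal pre-commit hook sketch: block commits containing AWS-style keys.

Illustrative only; the patterns cover a narrow subset of real secrets.
"""
import re
import subprocess

# AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),   # secret key assignment
]

def staged_files() -> list[str]:
    """List files added/copied/modified in the index."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def scan(path: str) -> list[str]:
    """Return a finding string for each line matching a secret pattern."""
    hits = []
    try:
        with open(path, "r", errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                for pat in SECRET_PATTERNS:
                    if pat.search(line):
                        hits.append(f"{path}:{lineno}: matches {pat.pattern}")
    except OSError:
        pass  # binary or unreadable file; a real scanner handles this properly
    return hits

def main() -> int:
    findings = [h for f in staged_files() for h in scan(f)]
    if findings:
        print("Commit blocked; possible secrets found:")
        print("\n".join(findings))
        return 1
    return 0
```

A real hook script saved as `.git/hooks/pre-commit` would end with `sys.exit(main())`; it is omitted here so the functions can be imported and tested in isolation.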

Step 3: AI-Specific Training and Awareness

Generic security training about “not clicking suspicious links” is boring and often irrelevant to an ML engineer. To fulfil your management responsibilities, you need to provide training that respects their intelligence and role.

Management should ensure training covers:

  • Data Poisoning: How to protect training sets from manipulation.
  • Model Inversion Attacks: Why we don’t expose raw model endpoints without safeguards.
  • Third-Party AI Tools: The policy on pasting proprietary code into public LLMs (e.g., ChatGPT).

If you are struggling to create a competency matrix that covers these niche roles, Hightable.io offers templates that can be customised for high-tech environments, ensuring you don’t miss standard requirements while adding your specific needs.

Step 4: The “Whistleblowing” Channel for Tech Debt

In AI, security risks often look like technical debt. “We hardcoded the credentials because the secrets manager was down.” Management must create a culture where reporting this isn’t punished.

Annex A 5.4 requires a channel for reporting security events. In your context, this means a safe way for a junior dev to say, “I think our training pipeline is insecure,” without fear of being blamed for delaying the launch. Management must prove they listen to and act on these reports.

The Evidence Locker: What the Auditor Needs to See

When the auditor arrives, they will want to see that your management team is engaged. For an AI company, good evidence looks like:

  • Slack/Teams Logs: Screenshots of the CTO reminding the team about a new security protocol (e.g., “Team, remember to enable MFA on the new AWS accounts”).
  • Pull Request Reviews: Evidence that code is actually being reviewed for security flaws before merging, enforced by Engineering Managers.
  • Town Hall Slides: A slide from your monthly all-hands where the CEO mentions security or privacy as a priority.
  • Signed Agreements (DPAs): Proof that every contractor (even the short-term labelling team) signed a Data Processing Agreement (DPA) and security policy.
  • Job Descriptions: Updated PDFs showing that “Information Security” is a defined responsibility for senior roles.

Common Pitfalls & Auditor Traps

Avoid these mistakes that typically lead to a Non-Conformity during Stage 2 audits.

  • The “Copy-Paste” Error: You downloaded a policy that refers to “clean desk” and “locking filing cabinets” but you are a remote-first AI company. It proves management hasn’t even read the policy.
  • The “Do as I Say, Not as I Do” Error: The policy requires MFA, but the CEO has it disabled because it’s “inconvenient.” The auditor will check executive accounts first.
  • The “Shadow IT” Gap: Your policy says “All software must be approved,” but your Data Science team is using 5 different unauthorised AI tools. Management is failing to enforce the rules.

Handling Exceptions: The “Break Glass” Protocol

Sometimes, production breaks at 3 AM and you need to bypass standard controls. Management responsibility involves defining how this happens safely.

The Emergency Workflow:

  • Trigger: P0 Incident requiring Root/Admin access.
  • Approval: CTO or CISO gives verbal/Slack approval.
  • Documentation: A retroactive ticket is logged in Linear/Jira tagged “Emergency Access.”
  • Review: The incident and the access log are reviewed in the next Management Meeting. This proves oversight.
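The documentation step above can be backed by a few lines of code so the retroactive record is created at approval time rather than reconstructed from memory. This Python sketch is an assumption-laden illustration: the file name and field names are invented, and a real implementation would open the “Emergency Access” ticket via your Linear/Jira integration instead of a local file.

```python
"""Sketch: record a break-glass event so it cannot be forgotten.

Field names and the audit-file path are illustrative placeholders.
"""
import json
from datetime import datetime, timezone

def log_break_glass(engineer: str, approver: str, reason: str,
                    incident_id: str) -> dict:
    """Append a structured emergency-access record, flagged for review."""
    record = {
        "type": "emergency-access",
        "incident": incident_id,
        "engineer": engineer,
        "approved_by": approver,
        "reason": reason,
        "granted_at": datetime.now(timezone.utc).isoformat(),
        "review_status": "pending",  # flipped at the next Management Meeting
    }
    # Append-only audit log; in practice this call would also create a
    # ticket tagged "Emergency Access" in your tracker.
    with open("break_glass_audit.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record
```

The `review_status` field is what gives management its oversight hook: the standing agenda item is simply “close every record still marked pending.”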

The Process Layer: “The Standard Operating Procedure (SOP)”

How to operationalise A 5.4 using your existing stack (Google Workspace, AWS, Linear).

  • Step 1: Onboarding (Automated). Use Google Workspace integration to force new hires to sign the Acceptable Use Policy before they get email access.
  • Step 2: Regular Briefing (Manual). Add “Security Update” as a standing agenda item in the weekly Engineering Standup. Note it in the meeting minutes.
  • Step 3: Enforcement (Automated). Configure AWS IAM policies to alert via Slack if a user creates an overly permissive role (e.g., “AdministratorAccess”).
  • Step 4: Offboarding (Manual). When a user is removed from Google Workspace, a Linear ticket is automatically created for the Engineering Lead to verify revocation of specific API keys.
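Step 3 above can be sketched as a small Lambda handler. This assumes an EventBridge rule forwarding CloudTrail IAM events to the function; the event field names follow the CloudTrail schema for `AttachUserPolicy`/`AttachRolePolicy`, but verify them against your own events, and `SLACK_WEBHOOK_URL` is a placeholder for an incoming-webhook URL you would configure.

```python
"""Sketch: alert Slack when an overly permissive IAM policy is attached."""
import json
import os
import urllib.request
from typing import Optional

FLAGGED_POLICIES = {"arn:aws:iam::aws:policy/AdministratorAccess"}
SLACK_WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")  # placeholder

def check_event(detail: dict) -> Optional[str]:
    """Return an alert message if the CloudTrail event attaches a flagged policy."""
    if detail.get("eventName") not in {"AttachUserPolicy", "AttachRolePolicy"}:
        return None
    policy = detail.get("requestParameters", {}).get("policyArn", "")
    if policy in FLAGGED_POLICIES:
        actor = detail.get("userIdentity", {}).get("arn", "unknown")
        return f"IAM alert: {actor} attached {policy}"
    return None

def notify_slack(message: str) -> None:
    """Post the alert to a Slack incoming webhook."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def handler(event: dict, context=None) -> None:
    """Lambda entry point for the EventBridge rule."""
    msg = check_event(event.get("detail", {}))
    if msg and SLACK_WEBHOOK_URL:
        notify_slack(msg)
```

The point for A 5.4 is not the automation itself but that management approved and funded it: the alert lands in a channel a named manager is accountable for watching.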

For AI companies, ISO 27001 Annex A 5.4 is about maturing your operations. It’s about moving from a group of brilliant individuals to a disciplined organisation. It ensures that your groundbreaking technology is built on a foundation that won’t crumble under the first cyberattack.

About the author

Stuart Barker
🎓 MSc Security 🛡️ Lead Auditor 30+ Years Exp 🏢 Ex-GE Leader

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
