If you are running an AI company, you live by the motto “move fast and ship models.” But when you decide to get ISO 27001 certified, you hit a speed bump: ISO 27001 Annex A 5.4, Management Responsibilities. This control doesn’t care about your latest algorithm; it cares about whether your leadership is actually driving security or just paying lip service to it.
For AI companies, where data is the product and developers often have god-mode access to production, this control is critical. It bridges the gap between “we have a policy” and “our engineers actually follow it.” Here is how to implement Annex A 5.4 without killing your velocity.
The Unique Challenge for AI Companies
In traditional businesses, management responsibility might mean ensuring people lock their filing cabinets. In an AI company, it means ensuring your Data Scientists aren’t downloading sensitive customer PII to their local machines to train a model “just this once.”
Annex A 5.4 requires management to ensure that all personnel apply information security in accordance with the established policies and procedures. This is tricky in AI because:
- The boundaries are blurry: Is a model weight file “software” or “data”? Who owns it?
- The culture is open: Research teams are used to sharing everything, which conflicts with the principle of least privilege.
- The speed is high: Security checks can feel like they are slowing down training runs.
Step 1: Define “Management” in a Flat Hierarchy
Many AI startups pride themselves on flat structures. But for ISO 27001, you need clear lines of accountability. You need to define exactly who is responsible for enforcing security policies.
It’s not enough to have a CISO. You need the Head of Engineering and the Head of Data Science to own the security of their teams. If a developer leaves an S3 bucket open, their direct manager must be the one to address it, not just the security team.
Actionable Tip: Update your job descriptions. Ensure that “Adhering to Information Security Policies” is a KPI for your Lead Data Scientists and Engineering Managers. If it’s not in their review, they won’t prioritize it.
Step 2: Make Policies “Code,” Not Just Paper
Annex A 5.4 asks management to ensure policies are applied. In an AI company, the best way to do this is to automate it. Don’t just tell people to secure their code; force it.
- Policy: “No secrets in code.” Management Action: Mandate pre-commit hooks that scan for API keys (a minimal hook sketch follows this list).
- Policy: “Access control.” Management Action: Implement automated de-provisioning so that when a contractor leaves, their access to the training cluster is cut instantly (a de-provisioning sketch closes this step).
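To make the first policy concrete, here is a minimal sketch of such a pre-commit hook in Python. The regex patterns and the hook itself are illustrative assumptions, not a production scanner; in practice, management would more likely mandate an established tool such as gitleaks or detect-secrets.

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook that blocks commits containing likely API keys.

A sketch, not a production scanner -- the patterns below are illustrative.
"""
import re
import subprocess
import sys

# Illustrative patterns: AWS access key IDs and generic key/secret assignments.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

def staged_diff() -> str:
    """Return the staged diff (what is about to be committed)."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        # Only inspect added lines, skipping the "+++" file headers.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern in PATTERNS:
            if pattern.search(line):
                findings.append(line.strip())
    if findings:
        print("Commit blocked: possible secrets detected:")
        for finding in findings:
            print(f"  {finding}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into .git/hooks/pre-commit and made executable, it rejects any commit whose staged changes look like they contain a key, turning the paper policy into a hard gate.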
Management’s role here is to approve the budget and time for these tools. If leadership denies the budget for a scanning tool, they are failing their A 5.4 responsibility.
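The second policy can be automated the same way. Below is a hedged sketch assuming AWS IAM via boto3; the username and trigger are hypothetical, and your identity provider (Okta, Google Workspace) would be the equivalent place to wire this in.

```python
"""Sketch of automated de-provisioning, assuming AWS IAM via boto3.

In a real setup this would be triggered by the HR system or identity
provider (e.g., an offboarding webhook), not run by hand.
"""
import boto3

iam = boto3.client("iam")

def deprovision(username: str) -> None:
    """Cut a leaver's access immediately: deactivate keys, drop group access."""
    # Deactivate every access key so API/CLI access stops at once.
    for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=username,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )
    # Remove group memberships (e.g., the group granting training-cluster access).
    for group in iam.list_groups_for_user(UserName=username)["Groups"]:
        iam.remove_user_from_group(
            GroupName=group["GroupName"], UserName=username
        )
    print(f"De-provisioned {username}: keys deactivated, groups removed.")

# Hypothetical trigger from an offboarding event:
# deprovision("contractor-jdoe")
```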
Step 3: AI-Specific Training and Awareness
Generic security training about “not clicking suspicious links” is boring and often irrelevant to an ML engineer. To fulfill your management responsibilities, you need to provide training that respects their intelligence and role.
Management should ensure training covers:
- Data Poisoning: How to protect training sets from manipulation (see the integrity-check sketch after this list).
- Model Inversion Attacks: Why we don’t expose raw model endpoints without safeguards.
- Third-Party AI Tools: The policy on pasting proprietary code into public LLMs (e.g., ChatGPT).
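To ground the first topic, a simple control worth teaching alongside the theory is dataset integrity checking. The sketch below is illustrative only: the manifest format and file paths are assumptions, standing in for however your team signs off an approved training set.

```python
"""Sketch of a training-set integrity check to surface tampering.

Assumes a manifest of known-good SHA-256 hashes produced when the
dataset was approved; the paths and manifest format are illustrative.
"""
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets don't load into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_path: str) -> bool:
    """Compare every file against the approved manifest; fail on any mismatch."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"file.csv": "<hex>", ...}
    ok = True
    for name, expected in manifest.items():
        if sha256_of(Path(data_dir) / name) != expected:
            print(f"TAMPERED OR CORRUPT: {name}")
            ok = False
    return ok

# Hypothetical gate at the top of a training pipeline:
# assert verify_dataset("data/train", "data/manifest.json"), "integrity check failed"
```

Run as a gate before every training job, a poisoned or swapped file fails loudly instead of silently shaping your model.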
If you are struggling to create a competency matrix that covers these niche roles, Hightable.io offers templates that can be customized for high-tech environments, ensuring you don’t miss the standard requirements while adding your AI-specific ones.
Step 4: The “Whistleblowing” Channel for Tech Debt
In AI, security risks often look like technical debt. “We hardcoded the credentials because the secrets manager was down.” Management must create a culture where reporting this isn’t punished.
Annex A 5.4 requires a channel for reporting security events. In your context, this means a safe way for a junior dev to say, “I think our training pipeline is insecure,” without fear of being blamed for delaying the launch. Management must prove they listen to and act on these reports.
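The channel itself can be deliberately low-friction. As one illustration, the sketch below assumes a Slack incoming webhook pointed at a dedicated reports channel; the webhook URL is a placeholder, and a real deployment should also offer an anonymous route.

```python
"""Sketch of a low-friction security-concern reporting path.

Assumes a Slack incoming webhook aimed at a dedicated channel; the URL
is a placeholder, and an anonymous intake option is worth adding too.
"""
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def report_concern(summary: str, reporter: str = "anonymous") -> None:
    """Post a security concern so it lands in front of management immediately."""
    payload = {"text": f":rotating_light: Security concern from {reporter}: {summary}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# report_concern("I think our training pipeline pulls unsigned datasets.")
```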
Step 5: Evidence for the Auditor
When the auditor arrives, they will want to see that your management team is engaged. For an AI company, good evidence looks like:
- Slack/Teams Logs: Screenshots of the CTO reminding the team about a new security protocol.
- Pull Request Reviews: Evidence that code is actually being reviewed for security flaws before merging (see the export sketch after this list).
- Town Hall Slides: A slide from your monthly all-hands where the CEO mentions security or privacy as a priority.
- Signed Agreements: Proof that every contractor (even the short-term labeling team) signed a Data Processing Agreement (DPA) and security policy.
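If you would rather export the pull-request evidence than screenshot it, your VCS can produce it on demand. The sketch below assumes the GitHub REST API; the owner, repo, and token handling are placeholders.

```python
"""Sketch of exporting pull-request review evidence for an auditor.

Assumes the GitHub REST API; OWNER/REPO and the token are placeholders.
"""
import json
import os
import urllib.request

OWNER, REPO = "your-org", "your-repo"  # placeholders
TOKEN = os.environ["GITHUB_TOKEN"]     # never hardcode -- see Step 2

def review_evidence(pr_number: int) -> list[dict]:
    """Fetch who reviewed a PR, their verdicts, and when: auditor-friendly facts."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}/reviews"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        reviews = json.load(resp)
    return [
        {
            "reviewer": r["user"]["login"],
            "state": r["state"],  # e.g. APPROVED, CHANGES_REQUESTED
            "submitted_at": r["submitted_at"],
        }
        for r in reviews
    ]

# for row in review_evidence(42): print(row)
```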
Conclusion
For AI companies, ISO 27001 Annex A 5.4 is about maturing your operations. It’s about moving from a group of brilliant individuals to a disciplined organization. It ensures that your groundbreaking technology is built on a foundation that won’t crumble under the first cyberattack.
The key is to integrate these responsibilities into your existing workflows (Jira, GitHub, Slack) rather than creating a separate “compliance layer.” And if you need a head start on the documentation, resources like Hightable.io can provide the framework you need to get compliant fast, letting you get back to building the future.
About the author
Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management.
Holding an MSc in Software and Systems Security, Stuart combines academic rigor with extensive operational experience. His background includes over a decade leading Data Governance for General Electric (GE) across Europe, as well as founding and exiting a successful cyber security consultancy.
As a qualified ISO 27001 Lead Auditor and Lead Implementer, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. He has successfully guided hundreds of organizations – from high-growth technology startups to enterprise financial institutions – through the audit lifecycle.
His toolkits represent the distillation of that field experience into a standardised framework. They move beyond theoretical compliance, providing a pragmatic, auditor-verified methodology designed to satisfy ISO/IEC 27001:2022 while minimising operational friction.