ISO 27001:2022 Annex A 5.1 Policies for information security for AI Companies


ISO 27001 Annex A 5.1, Policies for Information Security, is the cornerstone control of the standard: it requires management to define, approve, and publish the set of rules that govern data protection. For AI companies, this control is the primary requirement for demonstrating governance over high-risk models and training data, and it delivers a concrete business benefit: unlocking enterprise sales and satisfying EU AI Act mandates.

Information security policies are the foundation of any robust Information Security Management System (ISMS). They are the formal statements that articulate management’s intent, direction, and support for protecting your organisation’s valuable data. This guide is designed to break down the requirements of ISO 27001 Annex A 5.1 specifically for AI companies. This is your core control that provides the framework for all your security efforts.

For AI companies, this is not a tick-box exercise. You are handling vast amounts of sensitive training data, proprietary model weights, and GPU infrastructure. Establishing clear, comprehensive policies is the only way to build trust with enterprise clients who are terrified of their data leaking into your public model.

By reading this guide, you will gain a practical understanding of:

  • Why renting your policies from a SaaS platform is a strategic error.
  • How to write policies that satisfy auditors and Enterprise Procurement teams.
  • The specific connection between Policy 5.1 and regulations like DORA and the EU AI Act.
  • How to confidently pass your certification audit without buying expensive software.

The Business Case: Why This Matters for AI Companies

Let’s kill the “compliance is boring” mindset immediately. If you neglect this control, you lose revenue. It is that simple.

The Sales Angle

When you try to close a deal with a bank, a healthcare provider, or a Fortune 500 company, they will send you a Security Questionnaire. One of the first questions is: “Do you have an Information Security Policy approved by management?”

If the answer is “No,” or if you hand them a generic template that still references fax machines, the deal dies. They cannot legally share their data with you. Your policy is your passport to enterprise revenue.

The Risk Angle

For an AI company, the nightmare scenario is not just a hacked website. It is the exfiltration of your model weights or the poisoning of your training data. Without a policy that explicitly defines who can access your S3 buckets or Hugging Face repositories, you have no recourse when an employee makes a mistake. You cannot fire someone for breaking a rule you never wrote down.

The “No-BS” Translation: Decoding the Requirement

The standard uses academic language. Here is what it actually means for a modern AI company running on cloud infrastructure.

  • Information Processing Facilities: your AWS/GCP accounts, GPU clusters, H100s, and MacBooks.
  • Personnel: ML engineers, DevOps, prompt engineers, and contractors.
  • Access Control: who can SSH into production? Who has write access to the main branch?
  • Information Assets: training datasets, model weights, API keys, customer prompts.

Regulatory Context: DORA, NIS2, and the EU AI Act

AI companies are currently in the regulatory spotlight. Annex A 5.1 is your shield against these new laws.

  • EU AI Act: This regulation demands strict governance for high-risk AI systems. You must demonstrate human oversight and data governance. Your Information Security Policy is the document where you formally mandate these controls.
  • DORA (Digital Operational Resilience Act): If you sell to financial institutions in the EU, you are a third-party provider. DORA requires you to have a strategy for digital resilience. Your policy must articulate how you handle ICT risk.
  • NIS2: This directive focuses on supply chain security. As an AI vendor, you are the supply chain. Your policies must prove you are not the weak link.

Toolkit vs. SaaS: Why Ownership Wins

There is a trend of AI companies buying expensive SaaS platforms to “automate” compliance. For this control, that is often a mistake. You need to own your laws, not rent them.

  • Ownership: the Toolkit is 100% yours, and you keep the files forever; a SaaS platform is rented, and if you stop paying you lose your ISMS.
  • Cost: the Toolkit is a one-off fee with low impact on burn rate; SaaS is an expensive monthly subscription.
  • Simplicity: everyone knows how to use Word, so no training is needed; SaaS requires training your team on complex new software.
  • Portability: the Toolkit uses universal formats (PDF/DOCX) that auditors love; SaaS means vendor lock-in and data that is hard to export if you leave.
  • Customisation: infinite, because it is your document; SaaS limits you to the fields the vendor allows you to edit.

For a fast-moving AI company, the ISO 27001 Toolkit offers the speed and freedom you need without the monthly tax.

Deconstructing the Requirements: The Two-Tier Policy Structure

The ISO 27001:2022 standard encourages a strategic shift towards a two-tier policy structure. This is perfect for AI companies where you don’t want your Sales team reading complex rules about Python libraries.

1. The High-Level Information Security Policy

This is the “Constitution.” It is a high-level declaration approved by the CEO or Founders. It says: “We value security, we follow the law, and we will protect our customers’ data.” Every employee reads this.

2. Topic-Specific Policies

These are the specific laws for specific teams. Examples relevant to ISO 27001 Annex A 5.1 for AI companies include:

  • Access Control Policy: Who accesses the production models?
  • Secure Development Policy: How do we safeguard code and weights?
  • Data Classification Policy: Distinguishing between “Public Training Data” and “Private Customer Data.”
  • Supplier Security Policy: Rules for using OpenAI APIs or AWS infrastructure.

The Process Layer: The Standard Operating Procedure (SOP)

Policies define what you do; processes define how you do it. Here is a sample SOP for managing these policies using tools like Google Workspace and Linear.

Policy Creation and Review SOP

  1. Trigger: Annual review date or a significant change (e.g., integrating a new LLM provider).
  2. Drafting (Google Docs): The CISO or Lead Engineer updates the policy document in the “ISMS/Policies” folder in Google Drive. Track changes are enabled.
  3. Review (Linear): A ticket is created in Linear tagged “Security”. The Draft is attached. The CTO reviews the changes and comments.
  4. Approval (Manual/Email): The CTO or CEO sends an email or signs the document digitally stating: “I approve version 2.0 of the Access Control Policy.” Save this email as PDF.
  5. Publishing (Intranet): The Policy is saved as a read-only PDF and uploaded to the company Notion or Intranet.
  6. Communication (Slack): A message is posted in the #general channel: “Team, the Access Policy has changed. Please read the new section on API keys.”
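The annual-review trigger in step 1 is easy to automate. The sketch below is a minimal, hypothetical example: it assumes you keep a simple register mapping each policy to the date of its last approved review (the names and dates here are invented) and flags anything older than a year.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # annual review, per step 1 of the SOP


def reviews_due(register, today, interval=REVIEW_INTERVAL):
    """Return policy names whose last approved review is older than `interval`.

    `register` maps policy name -> date of the last approved review.
    """
    return sorted(
        name
        for name, last_review in register.items()
        if today - last_review > interval
    )


# Hypothetical register -- in practice this could be a sheet or a Drive folder listing.
policy_register = {
    "Information Security Policy": date(2025, 1, 10),
    "Access Control Policy": date(2023, 11, 2),
}

overdue = reviews_due(policy_register, today=date.today())
```

Run on a schedule (cron or a CI job) and open a Linear ticket for each overdue policy, and you have closed the loop between steps 1 and 3.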

The Evidence Locker: What the Auditor Needs to See

Auditors do not trust your word. They trust your evidence. For the audit week, prepare a folder with these exact files:

  • Signed Policy PDF: The high-level policy with a visible signature from the CEO or Founder.
  • Meeting Minutes: A PDF export of the Management Review meeting notes where the policy was discussed and approved.
  • Slack Export / Screenshot: A screenshot of the Slack announcement telling the company about the new policy.
  • Onboarding Checklist Export: A CSV export from your HR system (or a screenshot of a Linear onboarding ticket) showing that a new hire clicked “I accept” on the policies.
  • Version History: A screenshot of the Google Doc version history showing that the document has been edited and reviewed over time.
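A pre-audit sanity check on that folder can also be scripted. This is a sketch under assumptions: the file names below are invented placeholders, so substitute whatever naming convention your evidence locker actually uses.

```python
from pathlib import Path

# Hypothetical file names -- rename to match your actual evidence locker.
REQUIRED_EVIDENCE = [
    "information-security-policy-signed.pdf",
    "management-review-minutes.pdf",
    "slack-announcement.png",
    "onboarding-acceptance-export.csv",
    "version-history.png",
]


def missing_evidence(folder, required=REQUIRED_EVIDENCE):
    """Return the required evidence files not present in `folder`."""
    folder = Path(folder)
    return [name for name in required if not (folder / name).exists()]
```

Run it the week before the audit; an empty result means the locker is complete.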

Handling Exceptions: The “Break Glass” Protocol

Strict policies break production. Sometimes you need to fix a bug in the model pipeline at 3 AM. You need a “Break Glass” procedure so you don’t fail your audit when you save the company.

The Protocol:

  1. Emergency Access: Engineer requests Admin access to the production environment.
  2. The “Ticket”: If time permits, log a Linear ticket. If not, proceed and log immediately after.
  3. Time Limit: Access is granted for a specific window (e.g., 4 hours).
  4. Post-Incident Review: The CTO reviews the logs the next day to ensure only the necessary actions were taken. This review is documented on the ticket.
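The protocol above can be modelled as a small record so nothing falls through the cracks. This is a hypothetical sketch, not a prescribed tool: `BreakGlassGrant` and `pending_reviews` are invented names, and a real setup would persist these records in your ticketing system rather than in memory.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class BreakGlassGrant:
    """One emergency-access grant (hypothetical record format)."""

    engineer: str
    reason: str
    granted_at: datetime
    window: timedelta = timedelta(hours=4)  # the time limit from step 3
    reviewed: bool = False                  # flipped after the CTO's review (step 4)

    def expires_at(self) -> datetime:
        return self.granted_at + self.window

    def is_active(self, now: datetime) -> bool:
        return self.granted_at <= now < self.expires_at()


def pending_reviews(grants, now):
    """Grants whose access window has closed but which have no documented review."""
    return [g for g in grants if now >= g.expires_at() and not g.reviewed]
```

Anything returned by `pending_reviews` is exactly the gap an auditor will probe: access that expired without a documented post-incident review.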

Avoiding Common Pitfalls: Top 3 Mistakes and How to Prevent Them

I see AI companies fail this control constantly. Here are the top 3 non-conformities specific to your industry.

1. The “Shadow IT” Gap

The Failure: You have policies for AWS, but your Data Science team is using a credit card to spin up instances on Lambda Labs or using a personal Hugging Face account for company code.

The Fix: Update your policy scope to include “All cloud processing environments regardless of procurement method.”

2. The “Automated” SaaS Trap

The Failure: You use a GRC platform that says you are “100% Compliant,” but the policies in the platform are generic templates that mention “tape drives” and don’t mention “model weights.” The auditor reads one paragraph and knows you haven’t read it.

The Fix: Use the ISO 27001 Toolkit to customise the documents to reflect your actual tech stack.

3. The “Set and Forget” Error

The Failure: The policy was written two years ago. Since then, you have moved from Azure to GCP and started using Generative AI. The policy is now a work of fiction.

The Fix: Schedule a recurring calendar invite for “Policy Review” every 6 months or after every major infrastructure change.

By treating your policies as living documents that you own and control, you satisfy the auditor, close the enterprise deal, and actually secure your AI company.

ISO 27001:2022 Annex A 5.1 for AI Companies FAQ

What is ISO 27001 Annex A 5.1 for AI companies?

ISO 27001 Annex A 5.1 requires AI companies to define, document, and communicate a suite of information security policies. For AI firms, this ensures every data handling process for training sets and model weights is governed by management. Effective policies mitigate the large share of breaches that stem from poor governance in rapid-scale environments.

What are the mandatory requirements for AI security policies?

Policies under Annex A 5.1 must be management-approved and communicated to all personnel. For AI organisations, these documents should include specific protocols for:

  • Data lifecycle management for Large Language Models (LLMs).
  • Bias mitigation and algorithmic accountability in automated decision-making.
  • Secure handling of proprietary training data and third-party datasets.
  • Incident response procedures for AI-specific threats like prompt injection.

Why is management approval critical for Annex A 5.1?

Management approval provides the formal authority required to enforce security protocols across an organisation. In practice, successful ISMS implementations in AI startups are driven by C-suite accountability: top-down commitment ensures that security budget is actually allocated to protect critical intellectual property.

How often should AI security policies be reviewed?

AI security policies should be reviewed at least annually, or whenever significant shifts in model architecture occur. Given that AI technology cycles often refresh every six months, an agile review process keeps the entire policy framework relevant to emerging vulnerabilities. Documented reviews are essential evidence for auditors during the ISO 27001 Stage 2 certification audit.

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
