ISO 27001:2022 Annex A 5.10 Acceptable use of information and other associated assets for AI Companies

ISO 27001 Annex A 5.10 Acceptable use of information and other associated assets is a security control that requires organizations to identify, document, and implement rules for asset handling. For AI companies, this control is essential to prevent unauthorized data exposure and Shadow AI risks, ensuring that your intellectual property remains secure throughout the model development lifecycle.

If you are working towards ISO 27001 certification, you might view ISO 27001 Annex A 5.10 Acceptable use of information as just another form to fill out. Viewing the Acceptable Use control as a bureaucratic hurdle is a mistake. This control is actually your foundation for managing the most unpredictable part of security: people.

For AI companies, this is even more critical. Your team constantly creates and moves your most valuable assets: proprietary data, unique algorithms, and intellectual property. The security of these assets depends on clear rules. This guide breaks ISO 27001 Annex A 5.10 for AI companies down into simple steps, moving past the “checkbox” mindset to build a security culture that is both compliant and strong.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak. Annex A 5.10 isn’t about telling people not to watch Netflix on their lunch break. It is about stopping your engineers from accidentally leaking your IP.

The Auditor’s View (ISO 27001) vs. The AI Company View (Reality)

  • Auditor: “Rules for the acceptable use of information and of associated assets shall be identified, documented and implemented.” Reality: Write down exactly what devs can and cannot do. Can they paste code into ChatGPT? Can they download the training set to their local laptop? If you don’t say “No,” they will do it.
  • Auditor: “Personnel… shall be made aware of the acceptable use of assets.” Reality: Don’t hide the policy in a dusty SharePoint folder. Make them sign it on Day 1. If they leak data and you didn’t tell them not to, that is your fault, not theirs.
  • Auditor: “Assets shall be returned upon termination of employment.” Reality: When a dev quits, you need their MacBook back, but you also need to revoke their API keys and GitHub access immediately.

The Business Case: Why This Actually Matters for AI Companies

Why should a founder care about an “Acceptable Use Policy”? Because in the AI world, your assets are digital, fluid, and easy to steal.

The Sales Angle

Enterprise clients are terrified of “Shadow AI.” When they send you a security questionnaire, they will ask: “Do you have a policy preventing your staff from sending our data to third-party LLMs?” If you can’t produce a signed AUP that explicitly forbids this, you look like a risk. A robust A 5.10 control proves you have discipline.

The Risk Angle

The “Oops” Leak: The biggest risk isn’t a hacker; it’s a well-meaning engineer pasting a proprietary algorithm into a public forum to debug it. Without a clear Acceptable Use Policy, you have no recourse. This control sets the legal and cultural guardrails that stop you giving your IP away by accident.

DORA, NIS2 and AI Regulation: The Human Firewall

Regulators know that human error is the root cause of most breaches. Annex A 5.10 is your tool to demonstrate compliance with these emerging laws.

  • DORA (Article 16): Requires financial entities to have policies on the “use of ICT services.” If you sell to fintech, your AUP must align with their strict standards on data handling.
  • NIS2 Directive: Focuses on “Cyber Hygiene.” A core part of hygiene is ensuring staff know how to handle data securely. An AUP is the primary evidence for this.
  • EU AI Act: Implies strict governance over training data. Your AUP must explicitly state who is allowed to touch raw training data and how it can be used, preventing unauthorised “pollution” of your models.

ISO 27001 Toolkit vs SaaS Platforms: The Policy Trap

SaaS platforms love to “automate” policy acceptance. But ticking a box on a screen isn’t the same as understanding the rules. Here is why the ISO 27001 Toolkit wins on this control.

Feature-by-feature: ISO 27001 Toolkit (Hightable.io) vs. Online SaaS Platform

  • Ownership. Toolkit: You keep the signed PDF. It sits in your legal drive. If you get sued, you have the document ready. SaaS: Rented evidence. If you stop paying the subscription, you lose the logs proving your staff accepted the policy.
  • Customisation. Toolkit: 100% editable. Add specific clauses about “Hugging Face” or “GitHub Copilot” in seconds using Word. SaaS: Generic templates. Most SaaS tools offer a vanilla AUP that doesn’t cover modern AI risks, leaving you exposed.
  • Simplicity. Toolkit: No login required. Send the doc via DocuSign or email. Zero friction for new hires. SaaS: Login fatigue. Forcing a contractor to create an account on your GRC platform just to sign a policy is a waste of time.
  • Cost. Toolkit: One-off fee. Pay once, use forever. SaaS: Per-user pricing. As you hire more devs, the cost of just “hosting” your policy goes up.

Decoding Annex A 5.10: What is Acceptable Use?

Before implementing controls, you need to understand their purpose. A clear grasp of Annex A 5.10 is vital for building a framework that passes an audit. Here is what the official mandate means for you.

The Official Mandate

The ISO 27001 standard gives a direct definition for control A 5.10. It states that rules for acceptable use and procedures for handling information must be identified, documented, and implemented. An auditor will check this against a three-part structure:

  • Identified: Have you defined specific rules for your AI context, or is it just a generic template?
  • Documented: Are these rules written down in a formal policy?
  • Implemented: Is there proof these rules are active in your company?

A well-written policy isn’t enough. You need proof that it is a living part of your system.

The Core Purpose: Your First Line of Defence

Think of Annex A 5.10 as a preventive measure. It sets the “ground rules” for everyone who accesses your assets. The goal is to remove “plausible deniability.” You cannot hold someone responsible for breaking a rule if they didn’t know it existed.

By ensuring every user knows the boundaries, you build a strong defence against insider threats. This applies to everyone, from your lead data scientists to third-party contractors.

Building Your Cornerstone: The Acceptable Use Policy (AUP)

The Acceptable Use Policy (AUP) is the main document for Annex A 5.10. It is more than a list of rules; it is the bedrock of accountability. Platforms like hightable.io can be excellent resources for structuring these policies effectively.

Defining the Scope: What Assets Are Covered?

Your AUP covers more than just laptops. It applies to all assets in your organisation, and an auditor will check that your scope matches your inventory. Make sure to include the following (a minimal register sketch follows the list):

  • Hardware: Laptops, phones, and GPU servers.
  • Software: Operating systems, AI models, and code libraries.
  • Services: Cloud platforms (SaaS, IaaS), email, and hosting.
  • Data: Training datasets, databases, and documents.
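
If you do not yet have a dedicated GRC tool, the register can start life as a simple structured file. Below is a minimal sketch, assuming you track entries in Python; the field names and example assets are illustrative, not mandated by the standard.

```python
# Illustrative asset register covering the four categories above.
# The schema is an assumption; ISO 27001 does not prescribe field names.
ASSET_REGISTER = [
    {"name": "MacBook Pro (eng-042)", "category": "Hardware", "owner": "Jane Doe",
     "handling_rule": "Full-disk encryption; MDM enrolled; returned on exit"},
    {"name": "Churn model weights v3", "category": "Software", "owner": "ML Lead",
     "handling_rule": "Stored in the private model registry only"},
    {"name": "AWS production account", "category": "Services", "owner": "CTO",
     "handling_rule": "SSO with MFA; access reviewed quarterly"},
    {"name": "Clickstream training dataset", "category": "Data", "owner": "Data Eng Lead",
     "handling_rule": "No local copies; accessed via the approved notebook environment"},
]


def assets_missing_handling_rules(register):
    """Flag entries with no documented handling rule, a common audit finding."""
    return [asset["name"] for asset in register if not asset.get("handling_rule")]


if __name__ == "__main__":
    print(assets_missing_handling_rules(ASSET_REGISTER))  # [] means every asset has a rule
```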

Navigating Modern IT: Cloud Services and Shadow AI

ISO 27001 Annex A 5.10 for AI companies extends beyond your physical office. Auditors look closely at how you handle external services. Since AI relies heavily on cloud infrastructure, this is non-negotiable.

You are responsible for assets outside your network perimeter. First, identify all cloud resources and add them to your inventory. This links back to control A 5.9.

The Risk of Shadow AI

“Shadow IT” happens when employees use unapproved tools to work faster. For an AI firm, this might mean pasting code into an unapproved online tool like a PDF summariser or a code optimiser. This violates handling rules.

To an auditor, this looks like a lack of control. Your AUP must clearly state the approval process for new tools. If you need a robust way to track these assets and risks, tools like hightable.io can help centralise your inventory and policy management.

The Evidence Locker: What the Auditor Needs to See

When the audit comes, you need proof. Do not scramble. Prepare these four specific artifacts to turn “audit panic” into a simple file-gathering exercise (a small gap-check script is sketched after the list).

  • Signed AUPs (PDFs/DocuSign): A folder containing the signed policy for every active employee and contractor.
  • Onboarding Checklist (Ticket Export): Jira/Linear tickets for recent hires showing “Policy Acceptance” was a completed task.
  • Asset Return Log (Spreadsheet): A log showing that when “John Doe” left, his laptop was returned, and his AWS access was revoked on the same day.
  • Policy Review Record (Meeting Minutes): Evidence that management reviewed the AUP in the last 12 months to ensure it covers new AI tools.
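
If that evidence lives in plain files, the gap check is easy to script. Here is a minimal sketch, assuming an HR roster exported as CSV with name and status columns and a folder of signed PDFs named after each person; both layouts are assumptions, not requirements of the standard.

```python
import csv
from pathlib import Path


def missing_aup_signatures(roster_csv, signed_dir):
    """Return active people on the HR roster who have no signed AUP PDF on file.

    Assumes the roster CSV has 'name' and 'status' columns and that signed
    policies are stored as '<name>.pdf' in signed_dir (illustrative layout).
    """
    signed = {p.stem.lower() for p in Path(signed_dir).glob("*.pdf")}
    missing = []
    with open(roster_csv, newline="") as f:
        for row in csv.DictReader(f):
            is_active = row["status"].strip().lower() == "active"
            if is_active and row["name"].strip().lower() not in signed:
                missing.append(row["name"])
    return missing


if __name__ == "__main__":
    for name in missing_aup_signatures("hr_roster.csv", "signed_aups/"):
        print(f"Missing signed AUP: {name}")
```

An empty output means every active person has a signed AUP on file; anything else is a gap to close before the auditor finds it.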

Common Pitfalls & Auditor Traps

Here are the top 3 ways AI companies fail this control, especially when using automated platforms:

  • The “Click-Through” Fatigue: You use a SaaS platform that forces users to click “I Accept” on 20 policies in 5 minutes. The auditor interviews a dev and asks, “What does the policy say about AI tools?” The dev has no idea. Instant non-conformity.
  • The “Ghost” Contractor: You have 5 freelance data labelers. They have access to your data but never signed the AUP because they aren’t in your HR system. This is a critical gap.
  • The “Generic” Policy: Your policy talks about “clean desks” and “fax machines” but says nothing about LLMs, GitHub, or S3 buckets. It proves you simply copy-pasted a template without reading it.

Handling Exceptions: The “Break Glass” Protocol

Sometimes, you need to break the rules to ship. Maybe you need to use a specific unapproved tool to fix a P0 incident. You need a protocol for this.

The Exception Workflow (a minimal expiry-check sketch follows the steps):

  • Request: Engineer logs a ticket: “Requesting exception to AUP to use Tool X for 4 hours to debug incident.”
  • Approval: CISO/CTO approves via the ticket.
  • Constraint: A time limit is set (e.g., access revoked after 4 hours).
  • Audit Trail: The ticket serves as evidence that the violation was managed, not ignored.
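
To make the “Constraint” step enforceable rather than aspirational, a scheduled job can flag exceptions that have outlived their approved window. Below is a minimal sketch, assuming open exception tickets can be exported into the structure shown; the field names are hypothetical, so wire this to your actual tracker.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical export of open AUP exception tickets; in practice this would
# come from your ticketing system (Jira, Linear, etc.).
OPEN_EXCEPTIONS = [
    {"ticket": "SEC-142", "tool": "Tool X", "approved_by": "CTO",
     "granted_at": datetime(2025, 5, 1, 9, 0, tzinfo=timezone.utc),
     "duration_hours": 4},
]


def expired_exceptions(exceptions, now=None):
    """Return exceptions whose approved time window has already elapsed."""
    now = now or datetime.now(timezone.utc)
    return [e for e in exceptions
            if now > e["granted_at"] + timedelta(hours=e["duration_hours"])]


if __name__ == "__main__":
    for e in expired_exceptions(OPEN_EXCEPTIONS):
        print(f"{e['ticket']}: exception for {e['tool']} has lapsed; revoke access now.")
```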

The Process Layer: “The Standard Operating Procedure (SOP)”

Here is how to operationalise A 5.10 using your existing stack (Google Workspace, Linear); a minimal signature-gate sketch follows the steps.

  • Step 1: Onboarding (Automated). New user created in Google Workspace. A script sends the “Welcome” email containing the AUP link (DocuSign/PandaDoc).
  • Step 2: Verification (Manual). HR or Ops checks the signature status before unlocking the “Developers” group in AWS/Google. No signature, no access.
  • Step 3: Enforcement (Automated). Use a tool like GAM (Google Apps Manager) or MDM to block personal Gmail access on corporate devices, enforcing the “Work Use Only” policy.
  • Step 4: Offboarding (Manual). When a user leaves, a Linear ticket is created. The “Asset Return” checklist is mandatory before closing the ticket.
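
Here is a minimal sketch of the Step 2 gate, assuming your e-signature tool can export signature status to a CSV with email and status columns; the file layout and the “Developers” group decision are illustrative, not a prescribed integration.

```python
import csv


def cleared_for_dev_access(signatures_csv, new_hires):
    """Map each new hire's email to True only if their AUP shows as signed.

    The CSV layout ('email' and 'status' columns) is an assumption; adapt it
    to whatever your DocuSign/PandaDoc export actually produces.
    """
    status = {}
    with open(signatures_csv, newline="") as f:
        for row in csv.DictReader(f):
            status[row["email"].strip().lower()] = row["status"].strip().lower() == "signed"
    return {hire: status.get(hire.lower(), False) for hire in new_hires}


if __name__ == "__main__":
    decisions = cleared_for_dev_access("signatures.csv", ["new.dev@example.com"])
    for email, ok in decisions.items():
        print(f"{email}: {'add to Developers group' if ok else 'HOLD - no signed AUP'}")
```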

For an AI company, where value lies in data and IP, this control is your anchor. By implementing ISO 27001 Annex A 5.10 for AI companies correctly, you turn a static document into a dynamic tool for success.

ISO 27001:2022 Annex A 5.10 for AI Companies FAQ

What is ISO 27001 Annex A 5.10 for AI companies?

ISO 27001 Annex A 5.10 requires AI companies to define and communicate rules for the acceptable use of information and assets. For AI firms, this specifically covers every interaction with Large Language Models (LLMs) and the handling of proprietary training datasets, to prevent unauthorised data disclosure.

How does Annex A 5.10 prevent AI data leakage?

Annex A 5.10 prevents leakage by establishing strict “Acceptable Use” boundaries. By implementing a formal AI policy, organisations sharply reduce the risk of accidental PII exposure in public AI prompts, ensuring employees only use approved, enterprise-grade AI environments that respect data sovereignty.

What are the key requirements for an AI Acceptable Use Policy?

A compliant AI Acceptable Use Policy (AUP) must include specific technical and behavioural constraints (a minimal allowlist check is sketched after this list). Key requirements include:

  • Approved Tooling: A defined whitelist of sanctioned AI platforms, LLMs, and APIs.
  • Data Input Rules: Explicit prohibitions on entering trade secrets, source code, or customer PII into public AI models.
  • Output Verification: Mandatory “Human-in-the-Loop” (HITL) reviews to validate the accuracy and safety of AI-generated content.
  • Access Governance: Strict GPU usage and cloud-compute limits to prevent resource abuse.
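
The “Approved Tooling” requirement is easiest to enforce when the whitelist is machine-readable. Below is a minimal sketch, assuming the allowlist is kept as a set of approved domains and outbound tool requests are checked against it; the domains and the check are illustrative, and in practice this logic would live in your proxy or CASB.

```python
from urllib.parse import urlparse

# Illustrative allowlist of sanctioned AI endpoints; maintain it alongside the AUP.
APPROVED_AI_DOMAINS = {
    "api.openai.com",          # example: enterprise tenant only, per the AUP
    "bedrock-runtime.us-east-1.amazonaws.com",
}


def is_approved_ai_endpoint(url):
    """Return True if the request targets a domain on the approved AI tooling list."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_AI_DOMAINS


if __name__ == "__main__":
    print(is_approved_ai_endpoint("https://api.openai.com/v1/chat/completions"))   # True
    print(is_approved_ai_endpoint("https://random-pdf-summariser.example/upload"))  # False
```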

What is the risk of “Shadow AI” under Annex A 5.10?

Shadow AI refers to the unauthorised use of AI tools outside of corporate IT governance. Under Annex A 5.10, firms must mitigate this by providing safe alternatives; failing to do so leads to “Policy Drift,” where employees quietly fall back on personal AI accounts for sensitive company tasks.

What evidence do auditors look for in Annex A 5.10 compliance?

Auditors require documented evidence of policy communication and enforcement. This includes a signed Acceptable Use Policy (AUP) containing AI-specific clauses, records of staff security awareness training, and technical logs from CASB tools demonstrating that only approved AI assets are being accessed.

About the author

Stuart Barker

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
