ISO 27001:2022 Annex A 5.16 Identity management for AI Companies

ISO 27001 Annex A 5.16 Identity management is a security control that requires organizations to manage the full lifecycle of user and machine identities. For AI companies, this control is critical to prevent unauthorized access to model weights and training pipelines, ensuring that only verified entities can interact with high-value intellectual property and automated ML workflows.

In the world of Artificial Intelligence, development moves at lightning speed. While your focus is rightly on building groundbreaking models and leveraging powerful datasets, foundational security practices are what protect these invaluable assets. Effective identity management is one of the unsung heroes of your security strategy, and the 2022 revision of ISO 27001 makes it more important than ever. The standard has finally caught up with the reality of modern AI development, making Annex A 5.16 the primary strategic control for managing the complete lifecycle of all identities – including the automated pipelines and service accounts that drive your innovation.

This guide is designed to demystify ISO 27001 Annex A 5.16 Identity management specifically for an AI business like yours. We will analyse the unique identity-related risks you face and provide a clear, actionable path to compliance. By treating identity management not as a bureaucratic hurdle but as a strategic enabler, you can protect your intellectual property, secure your operations, and build a resilient organisational security posture.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak. Identity Management is about knowing exactly who (or what) is logging into your servers. It stops your “Anonymous” users from becoming “Root” users.

| The Auditor’s View (ISO 27001) | The AI Company View (Reality) |
| --- | --- |
| “The full lifecycle of identities shall be managed.” | Start to finish. When you hire a dev, you create their account. When you fire them, you delete it immediately. Don’t leave ghost accounts active for 6 months. |
| “Uniquely identified.” | No shared logins. Stop using one admin@company.com account for AWS that 5 developers share. Each human gets one named account. If they do something wrong, you need to know who did it. |
| “Authentication information.” | Kill the passwords. Enforce MFA (Multi-Factor Authentication) or passkeys everywhere. If your GitHub relies on a simple password, you will be hacked. |
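To turn the MFA row above into something testable, here is a minimal audit sketch in Python, assuming your human identities live in AWS IAM and default boto3 credentials are available; the output format is illustrative, not prescriptive.

```python
# Minimal sketch: list IAM users who can log into the console without MFA.
# Assumes AWS IAM as the identity store and default boto3 credentials.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # raises if no console password
        except iam.exceptions.NoSuchEntityException:
            continue  # API-only identity, no console login to protect
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"NON-COMPLIANT: {name} has a console password but no MFA")
```

Run it before the auditor does; empty output is exactly the evidence you want.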

The Business Case: Why This Actually Matters for AI Companies

Why should a founder care about “IAM” (Identity and Access Management)? Because identity is the only thing standing between a hacker and your model weights.

The Sales Angle

Enterprise clients will ask: “Do you support SSO (Single Sign-On)?” and “How do you manage service account keys?” If your answer is “We share passwords via Slack,” you lose the deal. If your answer is “We enforce Okta SSO with hardware MFA and rotate service keys daily using HashiCorp Vault,” you win the contract. Annex A 5.16 is how you prove that maturity.

The Risk Angle

The “Service Account” Breach: Most AI hacks don’t target humans; they target the “bot” accounts (Service Accounts) that have permission to read S3 buckets. If you don’t manage the lifecycle of these non-human identities, hackers will find an old, forgotten key on GitHub and drain your data. A 5.16 forces you to inventory and rotate these keys.
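As a sketch of what that key inventory looks like in practice, assuming the service accounts are AWS IAM users reachable via boto3 (the 90-day threshold is a common policy choice, not a number from the standard):

```python
# Minimal sketch: flag active IAM access keys older than 90 days.
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                print(f'ROTATE: {user["UserName"]} / {key["AccessKeyId"]} ({age} days old)')
```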

DORA, NIS2 and AI Regulation: Identity is the New Perimeter

In a cloud-native world, firewalls matter less. Identity matters more.

  • DORA (Article 9): Mandates “strong authentication” for access to financial data. If you are a fintech AI vendor, simple passwords are illegal. You must implement robust identity lifecycle management.
  • NIS2 Directive: Requires “cyber hygiene.” A core pillar of hygiene is removing stale accounts. If you have 50 active accounts for 30 employees, you are violating NIS2 principles of attack surface reduction.
  • EU AI Act: High-risk AI systems require “logging of events.” You cannot log events accurately if multiple users share one identity (“root”). Unique identification is a prerequisite for regulatory logging compliance.

ISO 27001 Toolkit vs SaaS Platforms: The Identity Trap

SaaS platforms love to integrate with your Identity Provider (IdP) but often miss the “non-human” identities that matter most in AI. Here is why the ISO 27001 Toolkit is superior.

| Feature | ISO 27001 Toolkit (Hightable.io) | Online SaaS Platform |
| --- | --- | --- |
| Scope | Humans & Bots. Our templates cover “Service Accounts” and “API Keys,” which are critical for AI pipelines. | Human-Centric. Most platforms only track employees. They ignore the 500 API keys your devs generated, leaving a huge blind spot. |
| Ownership | Your Process. You define the “Joiners, Movers, Leavers” (JML) process that fits your HR workflow. | Rigid Workflows. The platform forces you to use their “onboarding checklist,” which might not fit your specific engineering needs. |
| Simplicity | Checklists. Simple documents to guide HR and IT. “Did you revoke GitHub access? Yes/No.” | Alert Noise. You get spammed with alerts every time a new contractor is added, causing “alert fatigue.” |
| Cost | One-off fee. Manage 10 or 10,000 identities for the same price. | Per-User Tax. You pay extra for every user you track. Compliance becomes a tax on hiring. |

Understanding the Core Control: What is Annex A 5.16?

To apply this control effectively, you first need to understand its core purpose: managing the complete lifecycle of every identity in your organisation, human and machine alike, from the moment it is created to the moment it is removed.

Translating ‘Identity’ for an AI Environment

In an AI-driven organisation, the concept of an ‘identity’ extends far beyond your human team.

  • Individuals: Data scientists, ML engineers, and contractors.
  • Systems: Servers, VMs, and cloud services (e.g., Azure, AWS).
  • Non-human entities (Critical): Service accounts for CI/CD pipelines, data ingestion scripts, and model inference APIs. These “bots” often have higher privileges than humans.

The Identity Lifecycle in AI

The “full lifecycle” means managing these identities from creation to deletion. In your workflow, this translates to:

  • Provisioning: Creating a unique identity for a new ML engineer or a new training cluster.
  • Maintenance: Rotating API keys every 90 days. Reviewing permissions when a dev moves teams.
  • De-registration: Deleting the service account when the model is deprecated. Revoking the contractor’s access the minute their contract ends (see the sketch below).
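A minimal de-registration sketch, assuming the deprecated model’s service account is an AWS IAM user; the name svc-old-model is hypothetical:

```python
# Minimal sketch: revoke a deprecated service account's access without
# destroying its audit history. Deactivate keys rather than deleting the user.
import boto3

SERVICE_ACCOUNT = "svc-old-model"  # hypothetical account name
iam = boto3.client("iam")

for key in iam.list_access_keys(UserName=SERVICE_ACCOUNT)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=SERVICE_ACCOUNT, AccessKeyId=key["AccessKeyId"], Status="Inactive"
    )
    print(f'Deactivated {key["AccessKeyId"]} for {SERVICE_ACCOUNT}')
```

Deactivating rather than deleting mirrors the “disable it; don’t delete it” advice in the pitfalls section below: access dies immediately, the audit trail survives.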

Analysing the High-Stakes Risks: Unique AI Challenges

AI companies are prime targets because your identities hold the keys to valuable IP. Adopting a Zero Trust mindset is critical.

Exposure of Sensitive Training Datasets

Failing to manage identities for your automated data processing scripts can lead to catastrophic data breaches. If a single misconfigured service account has “Read All” access to your S3 data lake, an attacker only needs to compromise that one script to steal everything.

Disruption of Algorithmic Processes

Your CI/CD pipelines are operated by non-human identities. If an unauthorized actor gains access to a CI/CD service account, they could inject malicious code into your model training loop, poisoning the model at the source.

Vulnerabilities in the AI Supply Chain

You likely use third-party data sources or labelling services. Each external integration uses an identity (API key). If you don’t rotate these keys, a breach at your supplier becomes a breach at your company.

Your Practical Roadmap: 8 Steps to Compliance

Achieving compliance with Annex A 5.16 requires a structured approach.

  • Step 1: Understand Needs. Define roles before creating accounts.
  • Step 2: Identify Assets. Map identities to the assets they access (e.g., User A -> S3 Bucket B).
  • Step 3: Access Review. Audit who has what access right now (see the sketch after this list).
  • Step 4: Risk Assessment. Identify high-risk accounts (e.g., Admin).
  • Step 5: Develop Policies. Write the “Access Control Policy” and “Password Policy.”
  • Step 6: Implement Controls. Turn on SSO and MFA.
  • Step 7: Training. Teach staff why they can’t share passwords.
  • Step 8: Continual Improvement. Review access quarterly.
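For Step 3, AWS already generates most of the evidence for you. A minimal sketch, assuming boto3 and permission to pull the IAM credential report:

```python
# Minimal sketch: pull the IAM credential report for an access review.
# Report generation is asynchronous, so poll until it is ready.
import csv
import io
import time

import boto3

iam = boto3.client("iam")

while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

content = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(content)):
    print(row["user"], row["password_last_used"], row["mfa_active"])
```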

The Evidence Locker: What the Auditor Needs to See

When the audit comes, prepare these four artifacts to prove you are in control:

  • Identity Register (Excel): A list of all active user accounts and service accounts, their owner, and their review date.
  • JML Process Logs (Ticket Export): Jira/Linear tickets showing “New Hire” setup and “Termination” revocation steps completed by IT.
  • Access Review Report (PDF): Evidence that a manager reviewed the “Admin” group in AWS and confirmed membership is correct.
  • Service Account Inventory: A list of non-human accounts (e.g., ci-deployer) and their purpose (a generation sketch follows this list).
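The Service Account Inventory can be drafted straight from your cloud provider. A minimal sketch, assuming AWS IAM, a hypothetical svc- naming convention, and an owner tag on each bot account:

```python
# Minimal sketch: export non-human IAM accounts to a CSV inventory.
import csv

import boto3

iam = boto3.client("iam")

with open("service_account_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["account", "created", "owner"])
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            if not user["UserName"].startswith("svc-"):  # assumed convention
                continue
            tags = iam.list_user_tags(UserName=user["UserName"])["Tags"]
            owner = {t["Key"]: t["Value"] for t in tags}.get("owner", "UNOWNED")
            writer.writerow([user["UserName"], user["CreateDate"].date(), owner])
```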

Common Pitfalls & Auditor Traps

Here are the top 3 ways AI companies fail this control:

  • The “Orphaned” Account: A developer left 3 months ago, but their AWS user is still active because “we might need to check their code.” Instant non-conformity. Disable it; don’t delete it if you need the logs, but revoke the login (see the detection sketch after this list).
  • The “Shared Root”: You have a post-it note in the office with the root password for the server “in case of emergency.” Or a shared 1Password vault with the root credentials that everyone can see.
  • The “Forever” Key: You generated an API key for a script in 2021 and haven’t rotated it since. The auditor will check the “Key Age.” Anything over 90 days (without justification) is a red flag.
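A minimal detection sketch for the orphaned-account pitfall, assuming AWS IAM; it flags anyone whose console password has sat unused for over 90 days:

```python
# Minimal sketch: find IAM users whose console login looks abandoned.
from datetime import datetime, timezone

import boto3

STALE_DAYS = 90
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        # ListUsers omits PasswordLastUsed, so fetch the full record per user.
        detail = iam.get_user(UserName=user["UserName"])["User"]
        last_used = detail.get("PasswordLastUsed")  # absent for API-only users
        if last_used and (now - last_used).days > STALE_DAYS:
            print(f'STALE: {user["UserName"]} last logged in {last_used:%Y-%m-%d}')
```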

Handling Exceptions: The “Break Glass” Protocol

Sometimes you need a shared account for an emergency (e.g., the AWS Root account). You need a protocol.

The Emergency Identity Workflow:

  • Storage: The root password/MFA token is stored in a physical safe or a “break glass” vault in a password manager, accessible only to the CEO/CTO.
  • Alerting: Accessing this vault triggers an immediate alert to the security team (a detection sketch follows this list).
  • Audit: After the incident, the password is rotated immediately. A log is created detailing who used it and why.
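A minimal sketch of the alerting step, assuming AWS CloudTrail records root activity; it surfaces any root usage in the last 24 hours so you can match it against a declared emergency:

```python
# Minimal sketch: find recent root-account activity in CloudTrail.
from datetime import datetime, timedelta, timezone

import boto3

ct = boto3.client("cloudtrail")
since = datetime.now(timezone.utc) - timedelta(hours=24)

resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "root"}],
    StartTime=since,
)
for event in resp["Events"]:
    print(f'ROOT USED: {event["EventName"]} at {event["EventTime"]} -- declared emergency?')
```

In production you would push these events to EventBridge or a chat alert rather than polling, but the evidence requirement is the same: every root login maps to a logged, justified incident.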

The Process Layer: “The Standard Operating Procedure (SOP)”

How to operationalise A 5.16 using your existing stack (Google Workspace, AWS, Linear).

  • Step 1: Onboarding (Automated). HR adds a user to Google Workspace. SSO automatically creates their AWS and Linear accounts.
  • Step 2: Service Accounts (Manual). Engineer requests a bot account via a Linear ticket. “I need a service account for the new scraper.”
  • Step 3: Provisioning (Automated). Terraform creates the service account and outputs the credentials to a secure vault, sharing them only with the requester.
  • Step 4: Review (Manual). Quarterly, a script lists all users and service accounts. Managers must reply “Keep” or “Delete” on the ticket.
  • Step 5: Offboarding (Automated). Suspending the user in Google Workspace triggers a cascade that revokes access to all connected SSO apps immediately (see the sketch below).
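A minimal sketch of Step 5’s trigger, assuming a Google Workspace service account key with domain-wide delegation; the key file, admin address, and leaver address are hypothetical:

```python
# Minimal sketch: suspend a leaver in Google Workspace, cutting off every
# SSO-connected app at once. Uses the Admin SDK Directory API.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES, subject="admin@example.com"  # hypothetical values
)
directory = build("admin", "directory_v1", credentials=creds)

directory.users().update(
    userKey="leaver@example.com", body={"suspended": True}
).execute()
print("leaver@example.com suspended; downstream SSO access is now revoked")
```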

Treat identity management not as a bureaucratic hurdle but as a strategic enabler, and you will protect your intellectual property, secure your operations, and build a resilient organisational security posture.

ISO 27001 Annex A 5.16 for AI Companies FAQ

What is ISO 27001 Annex A 5.16 for AI companies?

ISO 27001 Annex A 5.16 requires AI companies to manage the full lifecycle of digital identities. For AI firms, this involves ensuring 100% of human and machine identities accessing sensitive training datasets, GPU clusters, and model deployment pipelines are uniquely identified and verified to prevent unauthorised access.

How does Annex A 5.16 protect AI model integrity?

Annex A 5.16 protects integrity by preventing identity spoofing within the MLOps lifecycle. Robust identity management sharply reduces the risk of unauthorised model “poisoning” or adversarial manipulation by ensuring that only verified researchers can modify training parameters or weights.

Does Annex A 5.16 apply to AI service accounts and machine identities?

Yes, Annex A 5.16 applies to machine identities, which typically far outnumber human users in AI environments. Automated service accounts used for API calls between LLMs and vector databases must be uniquely identified, governed by strict lifecycle management, and restricted to the principle of least privilege.

What are the identity management requirements for AI startups?

AI startups must implement scalable identity solutions to maintain compliance. Key requirements include:

  • Unique Identification: Prohibiting shared accounts for 100% of administrative and developer access.
  • Strong Authentication: Enforcing Multi-Factor Authentication (MFA) across all cloud and on-premise AI assets.
  • Automated Provisioning: Using SCIM or similar protocols to ensure identities are created and revoked in real time (see the sketch after this list).
  • Identity Federation: Leveraging Single Sign-On (SSO) to centralise control over fragmented AI tooling and SaaS platforms.
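A minimal sketch of the SCIM provisioning call, assuming your IdP or app exposes a standard SCIM 2.0 endpoint; the URL and token are hypothetical:

```python
# Minimal sketch: create a user via a standard SCIM 2.0 endpoint.
import requests

SCIM_BASE = "https://idp.example.com/scim/v2"  # hypothetical endpoint
TOKEN = "replace-with-idp-bearer-token"

resp = requests.post(
    f"{SCIM_BASE}/Users",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    json={
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "new.engineer@example.com",
        "active": True,
    },
)
resp.raise_for_status()
print("Provisioned SCIM user id:", resp.json()["id"])
```

Deprovisioning follows the same pattern, setting active to False or issuing a DELETE on the user’s id, which is what makes real-time revocation auditable.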

What evidence proves Annex A 5.16 compliance in an audit?

Auditors require documented proof of identity governance. Necessary evidence includes an Identity Management Policy, timestamped logs showing the creation and deactivation of user accounts, records of MFA enforcement across 100% of the workforce, and logs of periodic identity access reviews for sensitive ML environments.

About the author

Stuart Barker

🎓 MSc Security 🛡️ Lead Auditor 30+ Years Exp 🏢 Ex-GE Leader

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
