ISO 27001:2022 Annex A 8.32 Change Management for AI Companies

ISO 27001 Annex A 8.32 Change Management is a security control that establishes a formalised framework for modifications to systems and data pipelines. By implementing it, AI companies gain accountability and traceability over every change, protecting critical algorithms while maintaining operational stability through rapid innovation cycles.

For artificial intelligence companies, rapid innovation is the lifeblood of the business. However, uncontrolled changes to systems, models, and data pipelines introduce significant security risks that can undermine this progress. ISO 27001 change management, specifically Annex A 8.32, is not a bureaucratic hurdle designed to slow you down. It is a crucial framework for protecting your most valuable assets: your algorithms, proprietary data, and infrastructure.

The “No-BS” Translation: Decoding the Requirement

The Official ISO Text: “Changes to information processing facilities and information systems should be subject to change management procedures.”

The Auditor’s View

The standard expects a formalised, documented approach to every modification. I want to see that you have considered the risk before you hit ‘merge’ or ‘deploy’. If you cannot prove who authorised a change or what the rollback plan was, you have failed the control. It is about accountability and traceability across the entire lifecycle.

The AI Company View

This is for the 25-year-old DevOps engineer: don’t just ‘yolo’ a new model version into production or mess with the AWS VPC settings on a whim. Instead of ‘information processing facilities’, think about your MacBooks, AWS/GCP instances, and S3 buckets. Instead of ‘systems’, think about your Docker images and Python libraries. If you change a Terraform script or update a model weight, it needs a paper trail in Jira or Linear that shows someone else gave it the thumbs up and you know how to revert it if the site goes down.


The Business Case: Why This Actually Matters for AI Companies

The Sales Angle

Enterprise clients are terrified of AI. When they send you a 200-line Security Questionnaire, they will ask: “How do you ensure changes to your algorithms do not introduce bias or security holes?” If you have a solid Annex A 8.32 process, you can show them a professional audit trail. This closes deals because it proves you aren’t a ‘black box’ startup running on hope: you are a structured partner.

The Risk Angle

The nightmare scenario isn’t just a bug: it is a data leak or compliance breach caused by a rogue update. Imagine a developer inadvertently opening an S3 bucket to the public during a ‘quick fix’, or pushing a model update that starts hallucinating sensitive customer data. Without change management, these errors go unnoticed until the ICO or another regulator is knocking on your door.


Why the ISO 27001 Toolkit Beats SaaS Platforms

SaaS GRC platforms want to rent you your own compliance. They wrap simple requirements in complex interfaces that require constant training. Here is why the ISO 27001 Toolkit is the gold standard:

Feature | The ISO 27001 Toolkit | Online SaaS/GRC Platform
Ownership | You keep your files forever; you don’t rent them. | You lose access the moment you stop paying.
Simplicity | Everyone knows Word and Excel; no training needed. | Steep learning curves and proprietary UIs.
Cost | One-off fee, no recurring debt. | Expensive monthly subscriptions.
Freedom | No vendor lock-in. Your docs, your way. | You are stuck in their walled garden.

Top 3 Non-Conformities for AI Companies Using SaaS Platforms

  1. The “Ghost Process” Error: Companies buy a SaaS tool and assume it ‘does’ the compliance. The auditor finds a beautiful dashboard but zero evidence of actual peer reviews in GitHub or Jira. The tool is a facade.
  2. Automated Evidence Gaps: SaaS platforms often ‘pull’ data via API but miss manual changes. If a CTO tweaks a production database by hand, the tool has no record of it, leading to a major non-conformity when the auditor uncovers the manual change.
  3. The “Copy-Paste” Policy: SaaS tools provide generic policies. For AI companies, these policies fail to mention model versioning or data pipeline integrity. Auditors spot these generic, unedited documents in seconds.

DORA, NIS2, and AI Laws

Change management is the backbone of modern regulation:

  • DORA: Requires ‘ICT Change Management’ to ensure digital resilience in financial services. If you sell AI to banks, this control is non-negotiable.
  • NIS2: Focuses on supply chain security. You must prove that your updates do not introduce vulnerabilities into your customers’ environments.
  • EU AI Act: Mandates ‘Quality Management Systems’ and traceability for high-risk AI. This control is your primary evidence for model versioning and data lineage.

The Evidence Locker: What the Auditor Needs to See

Stop the audit panic by having these artefacts ready:

  • The Change Log: An export from Jira/Linear/GitHub showing a history of ‘Normal’ changes.
  • Pull Request Records: Proof of peer review (the ‘four-eyes’ principle) for code changes affecting production.
  • Deployment Logs: Screenshots of your CI/CD pipeline (e.g., GitHub Actions, CircleCI) showing successful tests before deployment.
  • Rollback Proof: A documented test record showing that you have successfully tested a ‘revert’ or rollback of a change.
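Pull request records are only useful evidence if someone actually checks them. A minimal sketch of a bulk ‘four-eyes’ check over exported PR records (the record fields `author` and `approvers` are illustrative assumptions, not a real GitHub or Jira export schema):

```python
# Minimal four-eyes check over exported pull-request records.
# Field names ("author", "approvers") are illustrative assumptions,
# not a real GitHub/Jira export schema.

def violates_four_eyes(pr: dict) -> bool:
    """A PR fails the four-eyes principle if nobody other than
    the author approved it before merge."""
    author = pr["author"]
    independent = [a for a in pr.get("approvers", []) if a != author]
    return len(independent) == 0

prs = [
    {"id": "PR-101", "author": "alice", "approvers": ["bob"]},
    {"id": "PR-102", "author": "carol", "approvers": ["carol"]},  # self-approved
    {"id": "PR-103", "author": "dave",  "approvers": []},         # no review
]

flagged = [pr["id"] for pr in prs if violates_four_eyes(pr)]
print(flagged)  # → ['PR-102', 'PR-103']
```

Running a check like this before the audit turns your PR history from raw data into ready-made evidence.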

Handling Exceptions: The “Break Glass” Protocol

Strict rules are great until production is down and the API is 500ing. You need an emergency path:

  • The Emergency Path: Bypassing the usual 24-hour wait time for a critical fix. The CTO or Head of Engineering acts as the ‘Break Glass’ authority.
  • The Paper Trail: Every emergency change must have a retroactive ticket created within 24 hours explaining the fix and the impact.
  • Time Limits: Temporary access (like AWS IAM roles) must expire automatically after 4 hours to ensure the ‘Break Glass’ isn’t left open.
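The two hard rules above (retroactive ticket within 24 hours, access time-boxed to 4 hours) are easy to verify mechanically. A minimal sketch, with illustrative field names that assume you log deployment and access timestamps per emergency change:

```python
# Sketch of break-glass bookkeeping checks: every emergency change needs a
# retroactive ticket within 24 hours, and temporary access must expire
# within 4 hours. The field names are illustrative assumptions.
from datetime import datetime, timedelta

MAX_TICKET_DELAY = timedelta(hours=24)
MAX_ACCESS_WINDOW = timedelta(hours=4)

def break_glass_compliant(change: dict) -> bool:
    """True if the ticket was raised in time and access was time-boxed."""
    ticket_ok = (change["ticket_created"] - change["deployed"]) <= MAX_TICKET_DELAY
    access_ok = (change["access_expires"] - change["access_granted"]) <= MAX_ACCESS_WINDOW
    return ticket_ok and access_ok

change = {
    "deployed":       datetime(2024, 5, 1, 2, 0),
    "ticket_created": datetime(2024, 5, 1, 10, 30),  # 8.5 h later: within 24 h
    "access_granted": datetime(2024, 5, 1, 1, 45),
    "access_expires": datetime(2024, 5, 1, 5, 45),   # exactly 4 h: within limit
}
print(break_glass_compliant(change))  # → True
```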

The Process Layer: Standard Operating Procedure (SOP)

How an AI company handles a change in a structured way:

  1. Request: Submit a Linear ticket tagged ‘Change’ with a description of the ‘Why’.
  2. Approval: Technical lead reviews the code and the risk assessment. Approval is recorded in the ticket comments.
  3. Provisioning: Code is pushed to a ‘Staging’ environment for automated testing.
  4. Revocation: Once deployed to ‘Production’, any temporary admin permissions used for the deployment are revoked.
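The four steps above can be sketched as an enforced state machine, so a ticket cannot skip a stage (the stage names mirror the SOP rather than any real tool’s workflow):

```python
# Sketch of the SOP as an enforced state machine: a change ticket may only
# move forward through the defined stages, one at a time.
# Stage names are illustrative assumptions mirroring the four SOP steps.

STAGES = ["request", "approval", "staging", "production", "access_revoked"]

class ChangeTicket:
    def __init__(self, ticket_id: str):
        self.ticket_id = ticket_id
        self.stage = "request"

    def advance(self, target: str) -> None:
        """Move to the next stage; skipping a stage raises an error."""
        current = STAGES.index(self.stage)
        if STAGES.index(target) != current + 1:
            raise ValueError(f"{self.ticket_id}: cannot jump from "
                             f"{self.stage!r} to {target!r}")
        self.stage = target

ticket = ChangeTicket("LIN-42")
ticket.advance("approval")        # tech lead signs off in the ticket
ticket.advance("staging")         # automated tests run here
ticket.advance("production")      # deploy
ticket.advance("access_revoked")  # temporary admin rights removed
print(ticket.stage)  # → access_revoked
```

Modelling it this way means a ticket that went straight from ‘request’ to ‘production’ simply cannot exist in your records, which is exactly what the auditor wants to see.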

Frequently Asked Questions (FAQ) for AI Companies

Do we really have to follow this for every minor bug fix?

No. Use change classification. ‘Standard’ changes are low-risk and pre-approved. ‘Normal’ changes need the full process. Only high-impact updates require the ‘bureaucracy’.
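This classification can be codified as a single rule, so the decision is not left to each engineer’s judgement. A minimal sketch; the risk labels and the ‘low-risk and pre-approved’ threshold are illustrative assumptions:

```python
# Sketch of change classification: only low-risk, pre-approved changes
# take the lightweight 'standard' track; everything else follows the
# full 'normal' process. Labels and thresholds are illustrative assumptions.

def classify_change(risk: str, pre_approved: bool) -> str:
    """Return 'standard' for pre-approved low-risk changes, else 'normal'."""
    if risk == "low" and pre_approved:
        return "standard"
    return "normal"

print(classify_change("low", True))    # → standard
print(classify_change("high", False))  # → normal
```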

Is Annex A 8.32 mandatory?

Yes. If you process data or build software, you cannot justify excluding this. An auditor will laugh you out of the room if you try to claim change management isn’t applicable to an AI company.

How do we handle changes to model weights?

Treat a model weight update as a ‘Normal’ change. It can affect the integrity of your output just as much as a code change. Log the version, the test results, and the approval.
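A minimal sketch of what such a log entry could look like, using a content hash of the weights file so the exact artefact is traceable (the file name and field names are illustrative assumptions):

```python
# Sketch of logging a model-weight update as a 'Normal' change: record the
# version, a content hash of the weights file, test results, and approver.
# File name and field names are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def log_weight_update(weights: Path, version: str,
                      approver: str, tests_passed: bool) -> dict:
    """Build an audit-log entry tying a version label to the exact bytes."""
    digest = hashlib.sha256(weights.read_bytes()).hexdigest()
    return {
        "change_type": "normal",
        "model_version": version,
        "weights_sha256": digest,
        "tests_passed": tests_passed,
        "approved_by": approver,
    }

# Example: a throwaway file standing in for a real weights artefact.
w = Path("model.bin")
w.write_bytes(b"\x00\x01fake-weights")
entry = log_weight_update(w, "v2.3.1", approver="cto", tests_passed=True)
print(json.dumps(entry, indent=2))
```

The hash matters: a version label alone can be re-used by mistake, but a digest proves which weights actually shipped.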

About the author

Stuart Barker

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigour with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
