ISO 27001 Clause 6.3 mandates a formal process for the planning of changes, ensuring the integrity of the ISMS is maintained during modifications. The primary implementation requirement is to assess risks and resource availability before execution; the business benefit is preventing production outages and maintaining regulatory compliance while scaling AI infrastructure.
In the high-velocity world of artificial intelligence, rapid innovation isn’t just a goal: it’s survival. But moving fast shouldn’t mean breaking things, especially when those “things” are security protocols protecting proprietary algorithms and sensitive datasets. Your crown jewels are your model weights and training pipelines: treat them with the respect they deserve.
For AI companies, where intellectual property is the absolute core of the business, managing changes to your Information Security Management System (ISMS) needs to be structured and predictable. You need to maintain trust with enterprise clients without slowing down your dev cycles. This is about being professional, not bureaucratic.
This is where ISO 27001 Clause 6.3, “Planning of Changes,” steps in. It provides a formal framework to ensure that as your neural networks and infrastructure evolve, your security posture remains rock solid. If you think you can automate this with a SaaS dashboard, you are in for a shock during your audit.
Table of contents
- The “No-BS” Translation: Decoding the Requirement
- The Business Case: Why This Actually Matters for AI Companies
- Why the ISO 27001 Toolkit Beats SaaS Platforms
- Compliance with DORA, NIS2, and AI Laws
- Your 10-Step Framework for Implementing Clause 6.3
- The Evidence Locker: What the Auditor Needs to See
- Common Pitfalls & Auditor Traps
- Handling Exceptions: The “Break Glass” Protocol
- The Process Layer: Standard Operating Procedure (SOP)
- ISO 27001 Clause 6.3 FAQ
The “No-BS” Translation: Decoding the Requirement
The official ISO text says: “When the organisation determines the need for changes to the information security management system, the changes shall be carried out in a planned manner.”
| The Auditor’s View (The Jargon) | The AI Company View (The Reality) |
|---|---|
| Changes to the ISMS | Swapping cloud providers, rewriting a security policy, or overhauling the model deployment pipeline. |
| Carried out in a planned manner | A ticket, an impact assessment, and a named approver before anything hits production, not a Friday-night hotfix. |
| Information Processing Facilities | Your AWS/GCP clusters, MacBooks, and the GitHub repos holding your model weights. |
| Documented evidence | Not "we discussed it on Slack," but a change log entry an auditor can sample six months later. |
Stop spending £10,000s on consultants and ISMS online platforms.
The Business Case: Why This Actually Matters
Compliance is often seen as a handbrake, but done well it's a revenue enabler. If you skip this control, here is the damage:
- Sales Angle: Large enterprise clients will grill you on your Change Management. They will ask: “How do you ensure a rogue dev doesn’t introduce a backdoor into the model weights?” If you can’t show a planned change process, you lose the deal.
- Risk Angle (The Nightmare Scenario): You “quick fix” a GPU cluster access issue by opening a port. You forget about it. Six hours later, a script kiddie has exfiltrated your entire training dataset. Your company value hits zero overnight.
- Vendor Bankruptcy: If you rely on a SaaS platform to manage this and they go bust, your “planned changes” history vanishes. Good luck explaining that to the auditor.
Why the ISO 27001 Toolkit Beats SaaS Platforms
Stop renting your compliance. SaaS platforms are expensive, complex, and keep your data hostage. Here is why the ISO 27001 Toolkit is the gold standard:
| Feature | ISO 27001 Toolkit (Word/Excel) | Online SaaS GRC Platforms |
|---|---|---|
| Ownership | You own the files forever. No rent, no expiry. | You rent access. Stop paying, and your ISMS is gone. |
| Simplicity | Everyone knows Word and Excel. No training needed. | Steep learning curve. You need a “platform expert.” |
| Cost | One-off fee. Massive long-term savings. | Expensive monthly subscriptions that never end. |
| Freedom | No vendor lock-in. It’s your documentation. | Your data is trapped in their proprietary system. |
| Auditor Trust | Shows you actually thought about and wrote your processes. | Often looks like a “copy-paste” job, triggering deeper scrutiny. |
Compliance with DORA, NIS2, and AI Laws
Clause 6.3 is a foundational pillar for modern regulation:
- DORA: Financial entities must manage ICT changes strictly. If you’re an AI firm selling to banks, Clause 6.3 is your evidence of operational resilience.
- NIS2: Requires “supply chain security” and “vulnerability handling.” Planned changes ensure you don’t introduce vulnerabilities when updating third-party AI libraries.
- EU AI Act: High-risk AI systems require strict quality management. Clause 6.3 ensures that changes to the model or data don’t degrade safety or fairness.
Your 10-Step Framework for Implementing Clause 6.3
- Establish a Process: Use a simple Word doc from the toolkit to define how changes happen.
- Assess Impact: Check the blast radius. If you update the LLM, does the API security hold?
- Plan Controlled Changes: Map out resources, like GPU availability for testing.
- Authorise: Get the CTO or Lead Engineer to sign off on significant infrastructure shifts.
- Implement: Stick to the plan. Use your existing CI/CD pipelines.
- Test: Don’t just check if it works: check if it’s still secure.
- Communicate: Tell the team via Slack or a Linear update.
- Review: Did the change do what was expected?
- Document: If it’s not in a ticket or a log, it didn’t happen.
- Manage Emergencies: Use a fast-track route for zero-day patches.
The Evidence Locker: What the Auditor Needs to See
When I walk in to audit your AI company, I want to see these four things:
- Change Logs: An export from Jira, Linear, or GitHub showing a sample of changes from the last 6 months.
- Impact Assessments: A few examples where you actually wrote down: “We are updating this library, and we checked that it doesn’t break our IAM roles.”
- Approval Records: Proof that a person (not a bot) authorised a production change.
- Test Reports: Screenshots of successful CI/CD runs or manual QA sign-offs.
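If your changes live in tickets, producing the auditor's sample is trivial. A minimal sketch, assuming you have already exported ticket data (the field names here are assumptions, not a Jira or Linear schema):

```python
import csv
import io

# Illustrative ticket data, e.g. pulled from a Jira/Linear export.
changes = [
    {"id": "SEC-101", "date": "2024-03-02", "change": "Rotate AWS IAM keys",
     "approved_by": "CTO", "tested": "CI run #4512"},
    {"id": "SEC-114", "date": "2024-04-18", "change": "Upgrade vector DB client",
     "approved_by": "Lead Engineer", "tested": "Manual QA sign-off"},
]

# Write the change-log sample as CSV, the format auditors can actually open.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=changes[0].keys())
writer.writeheader()
writer.writerows(changes)
print(buf.getvalue())
```

A plain CSV like this, regenerated on demand, is exactly the "export from Jira, Linear, or GitHub" evidence described above, and it survives even if your ticketing vendor disappears.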
Common Pitfalls & Auditor Traps
Most AI companies fail here because of these three things:
- The “Copy-Paste” Error: Using a SaaS platform’s default policy that says you have a “Change Advisory Board” when you are actually 5 guys in a Discord server. I will fail you for lying.
- The “Set and Forget” Error: Configuring your GRC tool once but never actually following the process in your real dev work.
- The “Shadow IT” Gap: Changing SaaS tools (e.g., moving from Slack to Teams) without documenting it as a change to the ISMS.
Handling Exceptions: The “Break Glass” Protocol
Production is down. The model is hallucinating. You don't have time for a three-day review.
- The Emergency Path: Use the "Root" or "Admin" account to fix the P0.
- The Paper Trail: You MUST open a ticket retrospectively within 24 hours explaining why you bypassed the process.
- Time Limits: Emergency access is granted for the duration of the incident only. Close the "Break Glass" session as soon as the fire is out.
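The 24-hour paper-trail rule can be enforced in code rather than trusted to memory. A hypothetical sketch (`BreakGlassSession` and its methods are illustrative names, not any real tool's API):

```python
from datetime import datetime, timedelta

class BreakGlassSession:
    """Hypothetical tracker for emergency access under the break-glass protocol."""

    def __init__(self, incident: str, opened_at: datetime):
        self.incident = incident
        self.opened_at = opened_at
        self.ticket_id: str | None = None   # the retrospective ticket
        self.closed_at: datetime | None = None

    def attach_ticket(self, ticket_id: str) -> None:
        self.ticket_id = ticket_id

    def close(self, when: datetime) -> None:
        # Time limit: the session ends with the incident.
        self.closed_at = when

    def is_compliant(self, now: datetime) -> bool:
        # Inside the 24h window without a ticket is still fine;
        # past 24h with no retrospective ticket is a process breach.
        if self.ticket_id is None and now - self.opened_at > timedelta(hours=24):
            return False
        return True

session = BreakGlassSession("P0: model serving down", datetime(2024, 5, 1, 2, 0))
session.close(datetime(2024, 5, 1, 3, 30))
print(session.is_compliant(datetime(2024, 5, 2, 4, 0)))  # False: no retrospective ticket
```

Wiring a check like this into a daily cron means a forgotten retrospective ticket surfaces as an alert, not as an audit finding six months later.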
The Process Layer: Standard Operating Procedure (SOP)
- Onboarding: New dev joins, they get read-only access to AWS. They are trained on the “Planning of Changes” policy from the toolkit.
- Daily Routine: All changes are requested via a Linear ticket tagged “Security Change.”
- Manual Step: Peer review of code/config is mandatory. No one-man pushes to prod.
- Offboarding: When a dev leaves, revoking their GitHub and AWS access is a “Planned Change” that is logged and verified.
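The offboarding step is easy to half-finish, so it pays to make "logged and verified" checkable. A minimal sketch, assuming a fixed set of services (the service names are examples, not your real stack):

```python
# Illustrative: the services where a leaver's access must be revoked.
REQUIRED_REVOCATIONS = {"github", "aws", "linear", "slack"}

def offboarding_complete(revoked: set[str]) -> tuple[bool, set[str]]:
    """Return (done, still_open) so the gap itself gets logged, not just a boolean."""
    missing = REQUIRED_REVOCATIONS - revoked
    return (not missing, missing)

# A dev left; GitHub and Slack were revoked, AWS and Linear were forgotten.
done, missing = offboarding_complete({"github", "slack"})
print(done, sorted(missing))  # False ['aws', 'linear']
```

Returning the missing set rather than a bare pass/fail gives you the verification record the "Planned Change" ticket needs.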
ISO 27001 Clause 6.3 FAQ
What is ISO 27001 Clause 6.3 for AI companies?
ISO 27001 Clause 6.3 requires AI companies to plan and carry out changes to their Information Security Management System (ISMS) in a controlled manner. For AI firms, this ensures that modifications to model architectures, data pipelines, or security governance are assessed for their impact on ISMS integrity and resource availability.
How does Clause 6.3 differ from DevOps change management?
Clause 6.3 focuses on strategic changes to the ISMS framework itself, while DevOps change management (Annex A 8.32) handles operational code deployments. Many AI startups confuse the two; Clause 6.3 specifically triggers when you pivot your AI model's purpose, change data providers, or reallocate security responsibilities within the leadership team.
What are the four pillars of planned change in AI systems?
AI companies must demonstrate that ISMS changes address four mandatory criteria to maintain compliance. Implementing these pillars correctly sharply reduces the risk of compliance failure during rapid scaling:
- Purpose: Defining the specific reason for the change, such as migrating to a new vector database.
- Integrity: Ensuring the change does not break existing security controls or EU AI Act compliance.
- Resources: Confirming the availability of the staff and GPU compute power required for the transition.
- Authority: Defining who is responsible for the final sign-off on the ISMS modification.
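A change record can be checked against the four pillars before it goes anywhere near approval. An illustrative sketch (the dictionary keys are assumptions mirroring the pillar names, not a mandated schema):

```python
# The four pillars a planned ISMS change must cover.
PILLARS = ("purpose", "integrity", "resources", "authority")

def missing_pillars(record: dict) -> list[str]:
    """Return the pillars a change record leaves empty or absent."""
    return [p for p in PILLARS if not record.get(p)]

change = {
    "purpose": "Migrate embeddings to a new vector database",
    "integrity": "Verified encryption-at-rest and access controls carry over",
    "resources": "Two engineers plus a reserved staging GPU node",
    "authority": "",  # no sign-off recorded yet
}
print(missing_pillars(change))  # ['authority']
```

Anything returned by `missing_pillars` is exactly what an auditor would flag, so the check doubles as a pre-audit self-test.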
How does Clause 6.3 support EU AI Act compliance?
Clause 6.3 provides the change control evidence required under the EU AI Act's Quality Management System (QMS) requirements. For high-risk AI systems, documenting ISMS changes gives you full traceability of model modifications. Using a single unified change process also cuts regulatory documentation overhead, since one record serves both regimes.
What evidence do auditors need for AI ISMS changes?
Auditors require objective evidence of a formalised planning process, typically found in Management Review minutes or a dedicated Change Log. AI firms must prove that all major structural changes were reviewed against resource constraints. Missing this evidence is a common source of minor non-conformities in tech-sector Stage 2 audits.