We have all been there. You try to log on to a website the moment a big sale starts or concert tickets are released, and the screen just spins. The site crashes. It’s frustrating for the user, but for the business, it is a disaster. It means lost revenue, damaged reputation, and angry customers.
In the world of Information Security, this isn’t just an “IT problem”; it’s a security risk. If your security tools crash because they run out of memory, or your log server fills up its hard drive, you are flying blind. This is exactly what ISO 27001:2022 Annex A 8.6 aims to prevent.
This control, titled Capacity Management, is about ensuring your systems have the muscle to handle the workload—both today and tomorrow. Let’s break down how to implement this practically without needing a degree in data science.
What is Annex A 8.6?
In the 2022 update of the standard, Annex A 8.6 falls under the “Technological Controls” category. The requirement is straightforward: The use of resources shall be monitored, tuned, and projections made of future capacity requirements to ensure the required system performance.
Simply put, you need to prove that you are watching your systems (Monitoring), fixing inefficiencies (Tuning), and planning for growth (Forecasting). The goal is to ensure availability. If a system goes down because the disk is full, you have failed the availability pillar of the CIA triad (Confidentiality, Integrity, Availability).
Step 1: Identify Your Critical Resources
You can’t manage capacity if you don’t know what resources you rely on. Start by looking at your critical assets. For most organisations, capacity management revolves around four key pillars:
- Compute (CPU): Is the processor running at 99% constantly?
- Memory (RAM): Are applications crashing because they can’t allocate memory?
- Storage (Disk/Cloud): Is the hard drive full? Is your S3 bucket approaching a quota?
- Network (Bandwidth): Is the internet connection creating a bottleneck?
Don’t forget the human element. While this control is technically focused, a lack of human capacity (not enough staff to check the logs) often leads to technical failures.
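If you want a quick, concrete view of where you stand on the technical pillars, the sketch below takes a one-off snapshot of CPU, memory, disk, and network usage on a single host. It is a minimal sketch, assuming the third-party psutil library is installed; the names and output are illustrative, not something the standard prescribes.

```python
# A minimal sketch, assuming the third-party psutil library is installed
# (pip install psutil). Names and values are illustrative only.
import psutil

def capacity_snapshot() -> dict:
    """Point-in-time view of the four technical pillars on one host."""
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),       # Compute
        "memory_percent": psutil.virtual_memory().percent,   # Memory
        "disk_percent": psutil.disk_usage("/").percent,      # Storage
        "net_bytes_sent": net.bytes_sent,                     # Network
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    for resource, value in capacity_snapshot().items():
        print(f"{resource}: {value}")
```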
Step 2: Establish a Baseline and Monitor
To know if something is wrong, you first need to know what “right” looks like. This is your baseline. If your file server usually runs at 40% CPU usage, a sudden jump to 90% is an anomaly worth investigating.
You need to implement monitoring tools. For modern cloud environments (AWS, Azure, Google Cloud), this is often built-in via dashboards like CloudWatch or Azure Monitor. For on-premise servers, you might use tools like Nagios, PRTG, or Datadog.
Auditor Tip: It is not enough to just have the tool. You need to configure alerts. An auditor will ask: “How do you know if the server is full?” If your answer is “I check it every Monday,” you might struggle. The correct answer is “I get an email alert when it hits 80%.”
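To make that 80% alert concrete, here is a minimal sketch using only the Python standard library. The threshold, mail server, and email addresses are placeholders for your own environment; in practice most teams let CloudWatch, Azure Monitor, or a tool like Nagios or Datadog handle this for them.

```python
# Minimal sketch: alert when disk usage crosses a threshold, rather than
# relying on a weekly manual check. Standard library only; the SMTP host,
# sender, and recipient below are placeholders for your own environment.
import shutil
import smtplib
from email.message import EmailMessage

ALERT_THRESHOLD = 80  # percent used; align with your documented threshold

def disk_used_percent(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def send_alert(percent: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Capacity alert: disk at {percent:.1f}%"
    msg["From"] = "monitoring@example.com"      # placeholder sender
    msg["To"] = "it-team@example.com"           # placeholder recipient
    msg.set_content("Disk usage has crossed the agreed capacity threshold.")
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder mail host
        smtp.send_message(msg)

if __name__ == "__main__":
    percent = disk_used_percent()
    if percent >= ALERT_THRESHOLD:
        send_alert(percent)
```

Run something like this on a schedule (cron or Task Scheduler) and you have a far better answer for the auditor than “I check it every Monday.”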
Step 3: Forecast and Project Future Needs
This is where Capacity Management differs from simple Monitoring. Monitoring tells you what is happening now. Capacity Management tells you what will happen next month.
You need to look at trends. If your database grows by 10GB every month, and you have 50GB of free space left, you know you have exactly five months before a crisis occurs. ISO 27001 requires you to spot this trend before the crisis hits.
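Here is a minimal sketch of that projection, mirroring the 10GB-per-month example above. The figures are illustrative; in practice you would feed in the growth rate taken from your monitoring history.

```python
# Minimal sketch: project months until storage is exhausted from a simple
# linear growth trend. Figures are illustrative; use your own monitoring data.
def months_until_full(free_gb: float, monthly_growth_gb: float) -> float:
    """Naive linear projection; assumes growth continues at the current rate."""
    if monthly_growth_gb <= 0:
        return float("inf")  # no growth, no forecasted exhaustion
    return free_gb / monthly_growth_gb

if __name__ == "__main__":
    runway = months_until_full(free_gb=50, monthly_growth_gb=10)
    print(f"Storage runway: {runway:.1f} months")  # -> 5.0 months
```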
Common triggers for capacity planning include:
- New Projects: Launching a new software tool? Estimate the load it will add.
- Seasonal Spikes: Do you work in retail? Prepare for Black Friday.
- Business Growth: If you hire 50 new staff, do you have enough VPN licences?
Step 4: Tune and Optimise
Buying more hardware isn’t always the answer. Sometimes, the solution is to “tune” what you already have. This aligns with the ISO requirement to optimise system performance.
Before you issue a purchase order for a new server, ask:
- Can we archive old data to free up space? (See Annex A 8.10 Information Deletion; a sketch of this follows the list).
- Is the application code inefficient?
- Are there memory leaks in the software?
- Can we compress files to save bandwidth?
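As an example of tuning before buying, here is a minimal sketch that compresses application log files older than 90 days. The directory, file pattern, and retention period are placeholders; make sure any archiving or deletion you automate follows your own retention rules (Annex A 8.10).

```python
# Minimal sketch: reclaim space by compressing log files older than 90 days.
# The directory, pattern, and retention period are placeholders only.
import gzip
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # placeholder path
MAX_AGE_DAYS = 90

def compress_old_logs() -> None:
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for log_file in LOG_DIR.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            archive = log_file.with_name(log_file.name + ".gz")
            with log_file.open("rb") as src, gzip.open(archive, "wb") as dst:
                shutil.copyfileobj(src, dst)   # write a compressed copy
            log_file.unlink()                  # remove the uncompressed original

if __name__ == "__main__":
    compress_old_logs()
```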
Step 5: Cloud and Auto-Scaling
If you are fully cloud-native, you might think, “I don’t need capacity management; the cloud is infinite.” This is a dangerous trap.
While the cloud offers elasticity (the ability to scale up and down automatically), it is not magic. You still need to configure the limits. If you don’t set auto-scaling rules, your application will crash just like a physical server. Conversely, if you set no limits, a bug could spin up 1,000 servers and bankrupt your company overnight. In the cloud, Capacity Management often becomes Cost Management.
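As a simple illustration of setting those limits, here is a minimal sketch that uses the AWS boto3 SDK to give an Auto Scaling group an explicit floor and ceiling. It assumes boto3 and AWS credentials are already configured; the group name and sizes are placeholders, not recommendations.

```python
# Minimal sketch: enforce explicit scaling limits so elasticity has both an
# availability floor and a cost ceiling. Assumes boto3 and AWS credentials
# are configured; the group name and sizes are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",  # placeholder group name
    MinSize=2,    # a floor so the service stays available
    MaxSize=10,   # a ceiling so a runaway bug cannot scale costs without limit
)
```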
Step 6: Documentation and Evidence
As with all things ISO 27001, if it isn’t written down, it didn’t happen. You need a Capacity Management Plan or procedure. This doesn’t need to be a 50-page thesis, but it should outline:
- What you monitor.
- What your thresholds are (e.g., “Alert at 80%, Critical at 90%”; see the sketch after this list).
- How often you review capacity reports (e.g., Monthly or Quarterly).
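One practical way to stop the written plan and the actual alerts drifting apart is to record the thresholds in a machine-readable form that your monitoring scripts read. The sketch below is illustrative only; the field names and values are placeholders for your own plan.

```python
# Minimal sketch: keep the documented thresholds and review cadence in one
# machine-readable place so the plan and the alerts stay aligned.
# Field names and values are illustrative only.
CAPACITY_PLAN = {
    "monitored_resources": ["cpu", "memory", "disk", "bandwidth", "licences"],
    "thresholds": {"warning_percent": 80, "critical_percent": 90},
    "review_frequency": "monthly",
    "report_owner": "it-operations",   # placeholder role
}

def alert_level(used_percent: float) -> str:
    """Map a utilisation figure onto the documented threshold levels."""
    if used_percent >= CAPACITY_PLAN["thresholds"]["critical_percent"]:
        return "critical"
    if used_percent >= CAPACITY_PLAN["thresholds"]["warning_percent"]:
        return "warning"
    return "ok"
```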
If you are struggling to structure this documentation, Hightable.io offers comprehensive ISO 27001 toolkits. Their templates can give you a solid Capacity Management Policy structure that ensures you don’t miss any requirements of the standard.
Common Pitfalls to Avoid
- Being Reactive: Waiting for the disk to fill up before deleting files. This is Incident Management, not Capacity Management.
- Ignoring Logs: Security logs can be huge. If you turn on detailed logging (Annex A 8.15) without checking your storage capacity, you will crash the server.
- Forgetting Licences: Capacity isn’t just hardware. Running out of user licences for your CRM is a capacity failure that stops business operations.
Conclusion
Implementing Annex A 8.6 is about being proactive. It shifts your IT team from “firefighters” to “architects.” By monitoring your current state, tuning your performance, and predicting your future needs, you ensure that your business can keep running no matter what demand is thrown at it.
Start small: identify your most critical server, set up an alert for disk space and CPU, and review it next month. That is your first step towards compliance.
About the author
Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management.
Holding an MSc in Software and Systems Security, Stuart combines academic rigour with extensive operational experience. His background includes over a decade leading Data Governance for General Electric (GE) across Europe, as well as founding and exiting a successful cyber security consultancy.
As a qualified ISO 27001 Lead Auditor and Lead Implementer, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. He has successfully guided hundreds of organisations – from high-growth technology startups to enterprise financial institutions – through the audit lifecycle.
His toolkits represent the distillation of that field experience into a standardised framework. They move beyond theoretical compliance, providing a pragmatic, auditor-verified methodology designed to satisfy ISO/IEC 27001:2022 while minimising operational friction.

