ISO 27001 Clause 7.1 is a mandatory management clause (not an Annex A control) that requires the determination and provision of adequate funding, personnel, and infrastructure to establish, run, and continually improve an ISMS. The Primary Implementation Requirement is formal budget allocation and demonstrable staff competency. This delivers the Business Benefit of ensuring security matures alongside rapid AI innovation.
Listen, if you are running an AI company, you are likely burning through GPU credits and talent at a rate that would make a traditional CFO faint. You are moving fast, and that is fine. But when it comes to ISO 27001 Clause 7.1 Resources, “moving fast” is no excuse for being under-resourced. If you think you can achieve certification by just buying a SaaS subscription and hoping for the best, you are about to have a very expensive reality check during your audit.
Clause 7.1 is not about bureaucracy: it is about putting your money, your people, and your infrastructure where your mouth is. It is the bedrock of your Information Security Management System (ISMS). Without proper resourcing, your security is just a collection of nice ideas on a digital dashboard. This guide is the gold standard for AI companies that want to get resourcing right, stop renting their compliance, and actually own their security posture.
Table of contents
- The “No-BS” Translation: Decoding the Requirement
- The Business Case: Why This Actually Matters
- Why the ISO 27001 Toolkit Beats SaaS Platforms
- Regulatory Alignment: DORA, NIS2, and AI Laws
- The Evidence Locker: What the Auditor Needs to See
- Top 3 Non-Conformities with SaaS Platforms
- Common Pitfalls and Auditor Traps
- Handling Exceptions: The “Break Glass” Protocol
- The Process Layer: Standard Operating Procedure (SOP)
- Frequently Asked Questions (FAQ)
- Conclusion: Resourcing as a Strategic Advantage
The “No-BS” Translation: Decoding the Requirement
The official standard says: “The organisation shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the ISMS.”
The Auditor’s View: I want to see that you have actually allocated a budget, assigned competent humans who aren’t already working 80 hours a week on dev tasks, and provided the tools required to keep the lights on. If the ISMS falls apart the moment a consultant leaves, you have failed.
The AI Company View: Look, this means you can’t just tell a 25-year-old DevOps engineer to “do the ISO thing” in their spare time. You need to give them the MacBooks, the AWS cloud credits for security logging, and the actual time to do the work. It means moving security from a “nice to have” Slack channel to a line item in your Jira board and your bank account.
Stop spending £10,000s on consultants and ISMS online platforms.
The Business Case: Why This Actually Matters
Resourcing is not just a cost: it is a revenue protector. If you starve your ISMS of resources, these things will happen:
- Sales Angle: When an enterprise client sends you a 300-row Security Questionnaire, they are looking for evidence of a “Security Lead” or “CISO.” If they see you’ve only allocated 5% of a junior dev’s time to security, they won’t trust you with their data. You lose the deal.
- Risk Angle: An under-resourced team misses logs. They miss the “data exfiltration” alert from your GPU cluster because they were too busy shipping a new feature. You end up on the front page of TechCrunch for all the wrong reasons.
- Vendor Bankruptcy: If you rely on a SaaS GRC platform and they go bust, you lose your entire security framework because you rented it. Proper resourcing means owning your files and your processes.
Why the ISO 27001 Toolkit Beats SaaS Platforms
I see AI companies falling for the SaaS GRC trap every week. They spend £15,000 a year on a dashboard that doesn’t actually solve the resource problem. Here is why the ISO 27001 Toolkit is the superior choice:
| Feature | ISO 27001 Toolkit (Word/Excel) | Expensive SaaS GRC Platforms |
|---|---|---|
| Ownership | You own the files forever, you don’t rent them. | Stop paying, and your ISMS disappears. |
| Simplicity | Everyone knows how to use Word and Excel, no training needed. | Complex UI that requires weeks of team training. |
| Cost | One-off fee. Massive ROI for AI startups. | Expensive monthly subscriptions that never end. |
| Freedom | No vendor lock-in. Move your docs anywhere. | You are trapped in their ecosystem and API limits. |
Regulatory Alignment: DORA, NIS2, and AI Laws
Resourcing is no longer optional under new laws. If you operate in the EU or UK, Clause 7.1 helps you stay out of court:
- DORA (Digital Operational Resilience Act): Financial clients will demand proof that you have “adequate ICT resources.” Clause 7.1 is your evidence that you have allocated the staff and systems to survive a cyber-attack.
- NIS2: This directive mandates management accountability. If you haven’t provided the resources for security, the C-Suite can be held personally liable for failures.
- EU AI Act: High-risk AI systems require “human oversight.” That oversight is a resource. Clause 7.1 ensures you have the headcount to meet these legal safety requirements.
The Evidence Locker: What the Auditor Needs to See
Don’t panic when the auditor arrives. Have these artifacts ready in a folder:
- Information Security Budget: A signed PDF or screenshot of a spreadsheet showing allocated spend for security tools, training, and audits.
- Job Descriptions (JDs): Updated JDs for your DevOps, MLOps, and IT leads that explicitly mention “Information Security responsibilities.”
- The Competency Matrix: An Excel sheet (included in the toolkit) showing who is responsible for what and proof of their training.
- Signed RACI Matrix: A document showing who is Responsible, Accountable, Consulted, and Informed for every ISO 27001 control.
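A cheap way to catch gaps before the auditor does is a pre-audit check script that confirms every artifact actually exists in the folder. This is a minimal sketch; the filenames are hypothetical placeholders, so swap in whatever your own evidence locker uses:

```python
from pathlib import Path

# Hypothetical filenames -- adjust to match your own evidence locker.
REQUIRED_EVIDENCE = {
    "security_budget.pdf": "Information Security Budget (signed)",
    "job_descriptions.docx": "JDs with explicit security responsibilities",
    "competency_matrix.xlsx": "Competency Matrix with training records",
    "raci_matrix.xlsx": "Signed RACI Matrix",
}

def check_evidence_locker(folder: str) -> list[str]:
    """Return a list of missing artifacts so gaps surface before Stage 2."""
    root = Path(folder)
    return [
        f"MISSING: {label} ({name})"
        for name, label in REQUIRED_EVIDENCE.items()
        if not (root / name).exists()
    ]

if __name__ == "__main__":
    for gap in check_evidence_locker("evidence_locker"):
        print(gap)
```

Run it in CI on a schedule and the "we lost the budget sign-off" surprise never happens mid-audit.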
Top 3 Non-Conformities with SaaS Platforms
AI companies using SaaS “automation” platforms often get hit with these major non-conformities:
- Lack of Ownership: The auditor asks a question, and a staff member says, “I don’t know, the platform does that.” Result: Major NC for Clause 7.1 because the organization hasn’t determined its own resources.
- Stale Data: The platform shows a “green tick,” but the actual resource (like a server) was decommissioned months ago. Result: Minor NC for lack of maintenance.
- Incompetent Implementation: The team followed the platform’s “automated” suggestions without understanding the risk. Result: NC for Clause 7.2 (Competence) which stems directly from failing to resource the project with knowledgeable people.
Common Pitfalls and Auditor Traps
- The “Copy-Paste” Policy: Using a generic policy from a SaaS tool that says you have a “Security Committee” that meets monthly when you actually don’t. An auditor will check your meeting minutes and fail you instantly.
- The “Shadow IT” Gap: Failing to resource security for new AI tools (like Midjourney, Claude, or custom LLM APIs) that your team is using without formal approval.
- The Single Point of Failure: Having one person who knows how the ISMS works. If they leave, the ISMS dies. This is a massive resource risk.
Handling Exceptions: The “Break Glass” Protocol
In a P0 incident, your security protocols might slow down the recovery. You need a way to bypass them without failing your audit.
- The Emergency Path: Reserve the AWS root account or a 2FA bypass for genuine emergencies only.
- The Paper Trail: Every “Break Glass” event must trigger a retroactive Linear or Jira ticket tagged “Security Exception.”
- Time Limits: All exceptions must expire within 4 hours. No “permanent” bypasses allowed.
- Review: The CTO must sign off on the exception within 24 hours of the incident being resolved.
The Process Layer: Standard Operating Procedure (SOP)
How to manage resources in an AI environment using your toolstack (Slack, AWS, Linear):
- Step 1 (Request): A new security resource (e.g., a new logging tool) is requested via a Linear ticket.
- Step 2 (Approval): The Information Security Manager reviews the cost vs. risk reduction. Finance approves the spend.
- Step 3 (Provisioning): Infrastructure is deployed via Terraform/AWS. Access is granted only to competent staff.
- Step 4 (Review): The resource is added to the “Asset Register” in your toolkit to ensure it is audited annually.
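The four-step SOP above is a simple linear workflow, which means you can enforce it as a tiny state machine so nobody provisions infrastructure before approval. A sketch under stated assumptions (the stage names and transition map are illustrative, not from the standard or any tool):

```python
from enum import Enum

class Stage(Enum):
    REQUESTED = "requested"      # Step 1: Linear ticket raised
    APPROVED = "approved"        # Step 2: ISM reviews cost vs. risk; finance approves
    PROVISIONED = "provisioned"  # Step 3: deployed via Terraform; access granted
    REGISTERED = "registered"    # Step 4: added to the Asset Register for annual audit

# Each stage may only follow the one before it -- no skipping approval.
NEXT = {
    Stage.REQUESTED: Stage.APPROVED,
    Stage.APPROVED: Stage.PROVISIONED,
    Stage.PROVISIONED: Stage.REGISTERED,
}

def advance(current: Stage) -> Stage:
    """Move a resource request to the next SOP stage, refusing skipped steps."""
    if current not in NEXT:
        raise ValueError(f"{current.value} is the final stage")
    return NEXT[current]
```

Wiring this into a Linear webhook (so a ticket can only move one column at a time) is a cheap way to turn the SOP from a document into an enforced process.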
Frequently Asked Questions (FAQ)
What is ISO 27001 Annex A Control 7.1 for AI companies?
Don’t confuse the two: Clause 7.1 (Resources, covered above) is a management requirement, while Annex A Control 7.1 is “Physical security perimeters.” The control requires AI companies to define and use physical security perimeters to protect areas containing sensitive assets such as model weights, proprietary training data, and GPU clusters. In practice, that means physical barriers—walls, card-controlled entry gates, or manned reception desks—to prevent unauthorised access to critical infrastructure.
Which physical security perimeters are essential for AI data centres?
Essential perimeters for AI data centres include multi-layered entry controls, environmental monitoring, and 24/7 surveillance for facilities hosting Large Language Models (LLMs). Weak perimeter logging is a recurring factor in hardware-related breaches in high-tech sectors. AI companies should secure the following areas:
- GPU Server Rooms: High-density compute areas requiring biometric or multi-factor physical access.
- Data Storage Vaults: Secured zones for cold storage and backup media containing 1st-party training datasets.
- Network Operation Centres (NOC): Physical hubs where model performance and security telemetry are monitored.
How does physical security apply to AI companies using cloud-only infrastructure?
AI companies running entirely in the cloud typically satisfy the physical perimeter control by reviewing the ISO 27001 or SOC 2 Type II reports of their providers (e.g. AWS, Azure, or GCP). Because most of the physical perimeter responsibility transfers to the cloud provider, your internal focus shifts to securing the office spaces where developers access the cloud console.
What evidence do auditors expect for physical security controls?
Auditors expect objective evidence that physical perimeters are functional and monitored: a site perimeter map, CCTV maintenance logs, and electronic access control reports showing who entered sensitive zones. Missing or incomplete access logs are a common source of minor non-conformities in Stage 2 audits for tech startups.
How does physical security under A 7.1 support EU AI Act compliance?
Physical security under Annex A Control 7.1 supports EU AI Act compliance by protecting a “high-risk” AI system’s technical infrastructure from tampering or unauthorised modification. Robust perimeters cut the risk of malicious data poisoning at the source, supporting the Article 15 requirements for accuracy, robustness, and cybersecurity.
Conclusion: Resourcing as a Strategic Advantage
Properly resourcing your ISMS using the ISO 27001 Toolkit is the fastest way to build a world-class security posture without the soul-crushing cost of SaaS platforms. It proves to auditors, clients, and regulators that you are a serious player in the AI market. Stop renting, start owning, and build your security on solid ground.