ISO 27001:2022 Annex A 5.25 Assessment and decision on information security events for AI Companies


ISO 27001 Annex A 5.25 is a security control that requires organisations to formally assess information security events and decide whether they should be classified as incidents. This documented assessment framework provides operational clarity and the business benefit of reduced alert fatigue, while protecting critical AI model integrity and data assets.

At the heart of a resilient security program is the ability to respond effectively when things go wrong. This is the core purpose of ISO 27001 Annex A 5.25 Assessment and decision on information security events. In simple terms, this control requires you to have a systematic process for looking at any security-related “event” that occurs and deciding if it is serious enough to be escalated and treated as a security “incident.”

The goal of this guide is to translate the formal requirements of Control 5.25 into a practical, actionable framework specifically for businesses operating with complex AI workflows. We will demystify the control’s purpose and provide clear steps to implement it in a way that strengthens, rather than stifles, your operations.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak. This control is about triage. It asks: “Is the house burning down (Incident), or is the toaster just burning toast (Event)?”

| The Auditor's View (ISO 27001) | The AI Company View (Reality) |
| --- | --- |
| "Information security events shall be assessed and it shall be decided if they are to be classified as information security incidents." | Don't wake the CEO for a spam email. Event: a failed login on the VPN. Incident: 500 failed logins in 1 minute from a Russian IP address. You need a written rule that says when to panic. |
| "The results of the assessment and decision… shall be documented." | Write it down. If you decide not to report something, document why: "Assessed as false positive because the user was on holiday." If you don't document the "No," it looks like you ignored it. |
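The "written rule that says when to panic" can be sketched as a simple threshold check. This is a hypothetical rule with illustrative numbers, not a prescribed detection standard; tune the threshold and window to your own environment:

```python
from collections import deque

# Hypothetical escalation rule: many failed logins from one source in a
# short window is an incident; an isolated failure is merely an event.
FAILED_LOGIN_THRESHOLD = 100   # attempts (illustrative)
WINDOW_SECONDS = 60            # rolling window (illustrative)

def classify_failed_logins(timestamps: list[float]) -> str:
    """Return 'incident' if the failure rate breaches the threshold,
    otherwise 'event' (log it, but don't page anyone)."""
    window: deque[float] = deque()
    for ts in sorted(timestamps):
        window.append(ts)
        # Drop attempts that have fallen outside the rolling window.
        while ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= FAILED_LOGIN_THRESHOLD:
            return "incident"
    return "event"
```

The point is not the specific numbers but that the rule is written down, deterministic, and readable by the on-call engineer at 3 a.m.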

The Business Case: Why This Actually Matters for AI Companies

Why should a founder care about “Event Assessment”? Because over-reacting kills productivity, and under-reacting kills the company.

The Sales Angle

Enterprise clients will ask: “What is your SLA for detecting and notifying us of a breach?” If your answer is “We investigate everything equally,” you are lying or inefficient. If your answer is “We use an automated triage system that escalates P0 events within 15 minutes,” you win the deal. Annex A 5.25 is your triage logic.

The Risk Angle

The “Alert Fatigue” Breach: Target (the retailer) was breached because they ignored a malware alert. Their security team saw thousands of “Events” a day and missed the “Incident.” A proper 5.25 assessment framework filters the noise so you spot the signal.

DORA, NIS2 and AI Regulation: Thresholds Matter

Regulators demand you know the difference between a glitch and a crisis.

  • DORA (Article 18): Requires financial entities to classify incidents based on criteria like “number of users affected” and “data loss.” You must have a predefined “Classification Methodology” to comply.
  • NIS2 Directive: Mandates reporting of “Significant Incidents.” You can’t report what you haven’t assessed. You need a documented threshold for “Significant.”
  • EU AI Act: “Serious Incidents” involving AI models (e.g., causing physical harm or breach of fundamental rights) must be reported. Your assessment logic must include AI-specific impact criteria.

ISO 27001 Toolkit vs SaaS Platforms: The Assessment Trap

SaaS platforms give you alerts, but they don’t give you judgment. Here is why the ISO 27001 Toolkit is superior.

| Feature | ISO 27001 Toolkit (Hightable.io) | Online SaaS Platform |
| --- | --- | --- |
| The Logic | Decision Matrix. A simple table: “If X happens, it is a P1.” Humans can read it and act. | Black Box AI. The platform uses “AI” to flag anomalies. It flags your CEO logging in from holiday as a “Critical Incident” but misses the slow data exfiltration. |
| Ownership | Your Criteria. You define what matters. | Generic Rules. Platforms apply generic rules that don’t understand your business context (e.g., that a spike in GPU usage is normal for training, not a cryptominer). |
| Simplicity | Checklist. A PDF checklist for the on-call engineer. | Dashboard Hell. Requires logging into a complex dashboard to “adjudicate” alerts, wasting time during a crisis. |
| Cost | One-off fee. Pay once. Own the logic. | Volume Pricing. Platforms often charge by the volume of logs ingested. Assessing more events costs you more money. |

The Core Challenge: What’s an ‘Event’ vs. an ‘Incident’ in Your AI Environment?

ISO 27001 defines an event as any observable occurrence that may be relevant to security. An incident is one or more events that compromise, or are likely to compromise, your operations or information assets. For an AI company, these definitions take on unique meaning.

| General Incident Type | What This Means for Your AI Company |
| --- | --- |
| Unauthorised access | Unauthorised access to your model training environments or cloud-based GPU clusters. |
| Unauthorised disclosure | A leak of sensitive training datasets or proprietary model weights. |
| System outage | A disruption of your critical inference APIs or algorithmic processes. |
| Unauthorised modification | A sophisticated model poisoning attack that alters your algorithm’s behaviour. |

A Deep Dive into AI-Specific Risks

For an AI company, the “information assets” at risk extend far beyond traditional databases. Your core intellectual property is embedded in your models.

Protecting Your Training Data and Models

Your training datasets are valuable. A data leak isn’t just a privacy breach; it is an act of corporate espionage. Under Control 5.25, unusual data access patterns (e.g., “bulk download” of training data) must be assessed as a potential high-severity incident.

Ensuring Algorithmic Integrity

The integrity of your models reflects your credibility. An event like sudden degradation in model performance requires formal assessment. Is it drift, or is it a data poisoning attack?

Securing the AI Supply Chain

Your AI supply chain includes third-party datasets and pre-trained models. A security alert from a key supplier (e.g., Hugging Face vulnerability) must be assessed to determine its impact on your posture.

Your Action Plan: Implementing Control 5.25

Implementing Control 5.25 creates a repeatable playbook.

Establish Your Assessment Framework

Create documented criteria for categorising events. Use a simple formula: Impact x Urgency = Priority.

| Priority Level | Example | Response Time |
| --- | --- | --- |
| Low (Event) | Single blocked phishing email. | 24 hours |
| Medium (Incident) | Malware on one laptop. | 4 hours |
| High (Incident) | Ransomware on a dev server. | 1 hour |
| Critical (Crisis) | Confirmed leak of a proprietary model. | Immediate |
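The Impact x Urgency = Priority formula can be sketched as a small lookup function. The 1–3 scoring scale and the score thresholds below are assumptions for illustration; the standard does not prescribe them:

```python
# Illustrative Impact x Urgency = Priority matrix.
# Impact and urgency are scored 1 (low) to 3 (high); the score
# thresholds are an assumption, not mandated by ISO 27001.
RESPONSE_TIMES = {
    "Low": "24 hours",
    "Medium": "4 hours",
    "High": "1 hour",
    "Critical": "immediate",
}

def priority(impact: int, urgency: int) -> str:
    """Map an Impact x Urgency score to a priority level."""
    score = impact * urgency
    if score >= 9:
        return "Critical"
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# e.g. a confirmed leak of model weights: impact 3, urgency 3 -> Critical
```

A blocked phishing email might score impact 1, urgency 1 (Low); ransomware on a dev server impact 3, urgency 2 (High). The mapping fits on one page, which is the point: the on-call engineer needs a lookup, not a judgment call.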

Document Everything Methodically

If it isn’t written down, it didn’t happen. Record the results of every significant assessment. “Event X assessed. Decision: No Incident. Rationale: False Positive.” This log is your audit trail.

The Evidence Locker: What the Auditor Needs to See

When the audit comes, prepare these artifacts:

  • Event Assessment Procedure (PDF): The document defining your criteria (P1-P4).
  • Incident Log (Excel/Ticket Export): A list of events assessed in the last 12 months. Ideally showing some “False Positives” to prove the system works.
  • Triage Tickets (Linear/Jira): A specific ticket showing the discussion: “Is this an incident?” -> “Yes, escalating.”

Common Pitfalls & Auditor Traps

Here are the top 3 ways AI companies fail this control:

  • The “Informal” Triage: The Dev team discusses a potential breach on Slack and decides it’s nothing. They don’t log a ticket. The auditor asks “How did you decide?” You have no evidence.
  • The “Undefined” Threshold: You treat every failed login as an incident. You have 10,000 open tickets. The auditor sees you are overwhelmed and failing to manage risk.
  • The “Missing” Feedback Loop: You classify an event as “Low” but it turns out to be “Critical.” You don’t update your criteria. You make the same mistake next time.

Handling Exceptions: The “Break Glass” Protocol

Sometimes you don’t have time to assess. If the server is encrypting itself, you act.

The “Assume Breach” Workflow:

  • Trigger: High-fidelity alert (e.g., AWS GuardDuty “UnauthorizedAccess”).
  • Action: Bypass assessment. Auto-escalate to P1 Incident.
  • Review: Conduct assessment after containment to confirm if it was a false positive.
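The break-glass workflow amounts to a small gate in front of normal triage: a short list of high-fidelity alert families skips assessment entirely. The list below is an assumption to be tuned to your own detection stack (`UnauthorizedAccess` is a real GuardDuty finding family; `RansomwareBehaviour` is a hypothetical placeholder):

```python
# High-fidelity alert families that bypass assessment entirely.
# Illustrative list -- tune to your own detection stack.
BREAK_GLASS_ALERTS = {
    "UnauthorizedAccess",    # e.g. AWS GuardDuty finding family
    "RansomwareBehaviour",   # hypothetical placeholder
}

def route_alert(alert_type: str) -> str:
    """Return the next step for an incoming alert."""
    # GuardDuty-style finding types look like "Family:Resource/Detail".
    family = alert_type.split(":", 1)[0]
    if family in BREAK_GLASS_ALERTS:
        # Bypass assessment: auto-escalate to P1 and page on-call.
        return "escalate:P1"
    # Everything else goes through the normal 5.25 assessment.
    return "assess"
```

Keep the break-glass list short: every family on it is an alert you have decided, in advance, is worth waking someone up for.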

The Process Layer: “The Standard Operating Procedure (SOP)”

Here is how to operationalise Annex A 5.25 using your existing stack (Slack, PagerDuty).

  • Step 1: Detection (Automated). Alert fires in #security-alerts channel.
  • Step 2: Initial Look (Manual). On-call engineer looks at the alert. Uses the “Triage Cheat Sheet” (PDF).
  • Step 3: Decision (Manual). Engineer clicks a button in Slack: “Declare Incident” or “Dismiss as False Positive.”
  • Step 4: Logging (Automated). If dismissed, the bot logs the reason to a Google Sheet for audit. If declared, it opens a Linear ticket.
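Step 4’s audit trail can be sketched as a plain append-only log. This sketch writes a local CSV instead of a Google Sheet, and the field names are assumptions; the substance is that dismissals get recorded with a rationale, exactly as the control requires:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("assessment_log.csv")
LOG_PATH.unlink(missing_ok=True)  # start fresh for this example run

def log_decision(event_id: str, decision: str, rationale: str) -> None:
    """Append the assessment outcome so there is evidence for the audit.
    Logging the 'No' is as important as logging the 'Yes'."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "event_id", "decision", "rationale"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            event_id,
            decision,    # "incident" or "false_positive"
            rationale,   # e.g. "User was on holiday; VPN geo alert"
        ])

# A dismissal still gets logged -- that row IS the audit trail:
log_decision("EVT-1042", "false_positive", "CEO travelling; login geo expected")
```

Twelve months of these rows, including the false positives, is precisely the Incident Log the auditor asks for in the Evidence Locker above.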

By creating a structured process to assess and learn from threats, you are proactively protecting your critical assets. This is how a perceived administrative burden is transformed into a core pillar of business strategy. The High Table ISO 27001 Toolkit helps you document this logic in minutes.

ISO 27001 Annex A 5.25 FAQ for AI Companies

What is ISO 27001 Annex A 5.25 for AI companies?

ISO 27001 Annex A 5.25 requires organisations to assess information security events and decide whether they should be classified as information security incidents. For AI companies, this means documented triage criteria covering AI-specific assets, including model training environments, GPU clusters, proprietary model weights, and inference APIs, so that every alert is either escalated or dismissed with a recorded rationale.

How do AI firms triage alerts from GPU and compute infrastructure under Annex A 5.25?

AI firms must define assessment criteria that reflect how their workloads actually behave. Key compliance steps include:

  • Baselining normal behaviour, so that a GPU usage spike during training is not escalated as suspected cryptomining.
  • Setting clear escalation thresholds for genuine anomalies, such as unexpected access to training environments or bulk downloads of datasets.
  • Documenting every dismissal (“assessed, no incident, rationale”) to preserve the audit trail.

Do events involving LLM API providers need formal assessment under Annex A 5.25?

Yes. A breach notification from a provider such as OpenAI or Anthropic, or anomalous behaviour in your integration with one, is an information security event that must be run through your documented assessment criteria to decide whether it constitutes an incident for your business. Failing to document that decision can lead to a non-conformity during an ISO 27001 certification audit.

What evidence do auditors expect for Annex A 5.25?

Auditors expect a documented event assessment procedure defining your priority levels, an event log showing assessments over the last 12 months (including dismissed false positives, which prove the system is actually used), and triage tickets demonstrating that the “is this an incident?” decision was made and recorded.

How does ISO 27001 Annex A 5.25 align with the EU AI Act?

Annex A 5.25 provides the operational triage that the EU AI Act’s “serious incident” reporting obligations depend on. You cannot report a serious incident you never assessed, so building AI-specific impact criteria (such as physical harm or breach of fundamental rights) into your assessment logic lets one process serve both frameworks and avoids duplicated documentation.

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
