ISO 27001:2022 Annex A 5.5 Contact with authorities for AI Companies

ISO 27001 Annex A 5.5 Contact with Authorities is a security control that mandates maintaining pre-established communication channels with legal and regulatory bodies. The core implementation requirement is a maintained register of emergency contacts; the business benefits are reduced regulatory fines and greater client trust during critical incidents.

If you are building the next generation of Large Language Models (LLMs) or deploying computer vision agents, “talking to the police” is probably low on your priority list. You are worried about inference costs, model bias, and finding enough GPUs. However, if you are pursuing ISO 27001 certification, ISO 27001 Annex A 5.5 Contact with Authorities is a control you cannot ignore.

For most traditional businesses, this control is a boring list of phone numbers. For an AI company, in the era of the EU AI Act and intensifying data privacy scrutiny, this control is a strategic minefield. Here is how to implement it effectively without slowing down your innovation.

The “No-BS” Translation: Decoding the Requirement

Let’s strip away the consultant-speak and look at what this actually means for a 25-year-old DevOps engineer trying to keep the training cluster online.

| The Auditor’s View (ISO 27001) | The AI Company View (Reality) |
| --- | --- |
| “The organisation shall establish and maintain contact with relevant authorities.” | Who do we call when the house is on fire? If we get ransomware, who is the FBI contact? If we leak data, who is the ICO contact? |
| “Contact with authorities shall be timely.” | If we wait 72 hours to tell the regulator about a breach because we couldn’t find the phone number, we risk a fine of up to 4% of global turnover. |
| “Information exchanged shall be limited to relevant information.” | Don’t send the police your entire model weights or source code just because they asked. Put a legal filter before hitting “Send.” |

The Business Case: Why This Actually Matters for AI Companies

Why should a founder care about a list of phone numbers? Because in the AI world, silence is expensive.

The Sales Angle

Enterprise clients are terrified of regulatory blowback. In security questionnaires, they will ask: “Do you have a defined incident response process that includes regulatory notification?” They want to know that if you leak their data, you know exactly who to tell and when, so they don’t get sued. A robust A 5.5 control proves you are a mature partner, not a risky startup.

The Risk Angle

Regulatory Fines: Under GDPR and the EU AI Act, fines are often calculated based on “cooperation” and “timeliness.” If you scramble for 48 hours just to find the right email address for the Data Protection Authority, you look negligent. Being prepared reduces the fine.

DORA, NIS2 and AI Regulation: The Reporting Clock is Ticking

ISO 27001 A 5.5 is the foundation for complying with the new wave of heavy-hitting EU laws.

  • DORA (Digital Operational Resilience Act): Financial entities and their critical ICT providers (that’s you, AI fintech vendors) must send an initial incident notification within 4 hours of classifying an incident as major. You cannot meet a 4-hour deadline if you don’t have the “Competent Authority” contact details saved in your register.
  • NIS2 Directive: Mandates reporting to the CSIRT (Computer Security Incident Response Team) within 24 hours for an “Early Warning.” A 5.5 requires you to identify who your local CSIRT is before the incident happens.
  • EU AI Act: Providers of GPAI (General Purpose AI) models with systemic risk must report serious incidents to the AI Office. This control ensures you have that specific contact path defined before you need it.
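The deadlines above can be encoded directly, so your incident tooling computes the notification clock instead of someone doing mental arithmetic at 3 a.m. Here is a minimal Python sketch; the hour values mirror the bullets above, but exactly when each clock starts (detection vs. classification) depends on each regulation’s technical standards, so treat the figures as illustrative:

```python
from datetime import datetime, timedelta

# Illustrative initial-notification windows, in hours, per the bullets above.
# Check the current technical standards for each regime before relying on these.
REPORTING_DEADLINES_HOURS = {
    "DORA": 4,   # major ICT incident, initial notification
    "NIS2": 24,  # early warning to the CSIRT
}

def report_due_by(regime: str, clock_start: datetime) -> datetime:
    """Return the latest time the initial notification must be sent."""
    return clock_start + timedelta(hours=REPORTING_DEADLINES_HOURS[regime])

start = datetime(2025, 3, 1, 9, 0)
print(report_due_by("DORA", start))  # 2025-03-01 13:00:00
print(report_due_by("NIS2", start))  # 2025-03-02 09:00:00
```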

ISO 27001 Toolkit vs SaaS Platforms: The Contact List Trap

Why would you pay a monthly subscription to store a list of phone numbers? Here is the reality of using a SaaS platform for Annex A 5.5.

| Feature | ISO 27001 Toolkit (Hightable.io) | Online SaaS Platform |
| --- | --- | --- |
| Ownership | You keep the file. It’s a Word/Excel document on your secure server. | You rent access. If you stop paying, you lose the list of people who can save your business. |
| Availability | Offline access. When the internet goes down or you are under DDoS attack, you can still open your local Excel file to find the ISP’s number. | Single point of failure. If the SaaS platform is down (or you are locked out), you cannot access the emergency contacts you need to resolve the incident. |
| Simplicity | No training needed. It is a list. Everyone knows how to read a list. | Over-engineered. Requires logging in, navigating menus, and hoping the “compliance module” is updated. |
| Cost | One-off fee. Pay once, use forever. | Monthly drain. Paying $15k/year to host a glorified phone book is bad business. |

What is Annex A 5.5? (It’s Not Just 911)

The requirement of Annex A 5.5 is deceptively simple: The organisation must establish and maintain contact with relevant authorities.

The keyword here is relevant. If you are an AI company processing millions of user interactions, “relevant” doesn’t just mean the local fire department. It means the people who can shut you down or fine you if your model leaks training data or violates a safety statute.

The goal is preparedness. When a crisis hits, whether it’s a ransomware attack locking up your training data or a regulatory inquiry into your data scraping practices, you shouldn’t be scrambling to find out who to call. You need a pre-approved communication channel ready to go.

The “Authorities” Landscape for AI Companies

This is where AI companies differ from a standard bakery or consultancy. Your list of authorities is going to be longer and more complex. When implementing this, you need to categorize “authorities” into three buckets:

1. Data Protection and AI Regulators

This is your biggest risk area. If your model accidentally reveals PII (Personally Identifiable Information) from its training set, you have a data breach. You need the direct contact details for:

  • The Information Commissioner’s Office (ICO) or your local Data Protection Authority (DPA).
  • AI Safety Institutes: As new regulations like the EU AI Act come online, specific bodies are being formed to oversee AI safety. You need to know who they are.

2. Law Enforcement and Cyber Units

If someone steals your proprietary model weights, that is intellectual property theft. If you are hit by a state-sponsored attack, local police can’t help you. You need contacts for:

  • Regional Cyber Crime Units (e.g., Action Fraud in the UK).
  • Federal agencies handling IP theft or critical infrastructure attacks.

3. Operational Authorities

Who keeps your GPUs running? While technically “utilities,” maintaining contact with your cloud provider’s emergency response team (AWS, Azure, GCP) or your data center’s security desk is often grouped here for practical incident response.

How to Implement This Without the Headache

Implementation doesn’t mean having a red phone on your desk. It means having a document. Here is the practical way to satisfy the auditor:

  • Step 1: Build the Register. Create a simple table in your Information Security Management System (ISMS). It needs to list the Authority Name, Contact Details (phone/email/portal), and the Reason for Contact.
  • Step 2: Define the “Trigger”. This is crucial for AI startups. You don’t want a junior developer calling the Data Protection Regulator because they found a minor bug. You need a clear protocol: “If X happens, the CISO contacts Y.”
  • Step 3: Keep it Updated. Regulators change their reporting portals constantly. A broken link in your “Contact with Authorities” list is an easy non-conformity for an auditor to find. Schedule a 6-month review to click the links and verify the numbers.
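The register in Step 1 is just structured data, so if you want a machine-checkable copy alongside the Excel file, a sketch like this works. The entries, field names, and contact channels below are illustrative placeholders, not real reporting endpoints:

```python
from dataclasses import dataclass

@dataclass
class AuthorityContact:
    name: str           # e.g. "ICO" (UK Data Protection Authority)
    channel: str        # phone / email / reporting portal URL
    reason: str         # the "Reason for Contact" column from Step 1
    last_verified: str  # ISO date of the last link/number check (Step 3)

# Illustrative entries only -- replace with your actual regulators.
REGISTER = [
    AuthorityContact("ICO", "https://ico.example/report", "personal data breach", "2025-01-10"),
    AuthorityContact("NCSC", "report@ncsc.example", "significant cyber incident", "2025-01-10"),
]

def contacts_for(reason_keyword: str) -> list:
    """Find the register entries relevant to a given incident type (Step 2's trigger)."""
    return [c for c in REGISTER if reason_keyword in c.reason]

print([c.name for c in contacts_for("data breach")])  # ['ICO']
```

Keeping the trigger keyword in the register itself means Step 2’s “If X happens, the CISO contacts Y” rule is data, not tribal knowledge.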

If you don’t want to build this register from scratch, the ISO 27001 Toolkit includes a pre-formatted Contact with Authorities template. A proven template saves time and ensures you aren’t missing the standard requirements auditors expect.

The Evidence Locker: What the Auditor Needs to See

When I audit you, I don’t just want to see a policy. I want to see proof. Provide these artifacts:

  • The Contact Register (Excel/PDF): A list of names, numbers, and roles. It must be dated.
  • Maintenance Log: A simple record showing “Checked by CISO on [Date]. All links valid.”
  • Incident Response Plan: A section in your IRP that explicitly references when to use this list.
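The maintenance log can be as simple as a generated, dated line. The format below is an assumption on my part; any dated, attributable record satisfies the auditor:

```python
from datetime import date

def maintenance_log_entry(checked_by: str, result: str, today: date) -> str:
    """Produce a dated, auditor-friendly log line for the register review."""
    return f"Checked by {checked_by} on {today.isoformat()}. {result}"

print(maintenance_log_entry("CISO", "All links valid.", date(2025, 3, 1)))
# Checked by CISO on 2025-03-01. All links valid.
```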

Common Pitfalls & Auditor Traps

Here are the top 3 ways AI companies fail this control, specifically when relying on SaaS automation:

  • The “US Default” Error: You use a US-based SaaS platform that auto-populates “FBI” and “FTC” as your authorities. But you are a UK company. You need the ICO and NCSC. Instant non-conformity for irrelevance.
  • The “Broken API” Error: The SaaS tool links to a reporting portal that moved 6 months ago. Because you trusted the tool and didn’t check manually, your incident response process is broken.
  • The “Special Interest” Confusion: You listed “OpenAI” or “Hugging Face” as an authority. They are vendors or communities (Annex A 5.6), not legal authorities. Keep them separate.

Handling Exceptions: The “Break Glass” Protocol

What if the CISO is on a plane and the FBI is at the door? You need a protocol for when standard authority contact procedures fail.

The Emergency Workflow:

  • Trigger: CISO/CEO unavailable during a P0 crisis involving law enforcement.
  • Designated Delegate: Explicitly name the “Head of Engineering” or “Legal Counsel” as the backup contact.
  • Access: Ensure this delegate has physical or digital access to the Contact Register (another reason why a file in a shared drive is better than a CISO-only SaaS login).
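The break-glass workflow above is an ordered fallback: walk the chain until you find someone reachable. A sketch, with the role names as assumptions to be swapped for your actual org chart:

```python
# Ordered "break glass" escalation chain -- role names are assumptions.
ESCALATION_CHAIN = ["CISO", "CEO", "Legal Counsel", "Head of Engineering"]

def authority_contact_owner(unavailable: set) -> str:
    """Return the first person in the chain who is reachable."""
    for role in ESCALATION_CHAIN:
        if role not in unavailable:
            return role
    # If nobody is reachable, the register itself must be accessible to more people.
    raise RuntimeError("No delegate reachable; widen access to the Contact Register")

print(authority_contact_owner({"CISO", "CEO"}))  # Legal Counsel
```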

The Process Layer: “The Standard Operating Procedure (SOP)”

How to operationalise A 5.5 using your existing stack (Google Workspace, Slack).

  • Step 1: Storage (Manual). Save the “Contact with Authorities.xlsx” in a restricted Google Drive folder: “Legal & Compliance > Emergency Contacts”.
  • Step 2: Access Control (Automated). Use Google Workspace groups to ensure only “C-Suite” and “Legal” have access to this folder.
  • Step 3: Review Reminder (Automated). Set a recurring Calendar invite every 6 months for the CISO: “Verify Authority Contacts.”
  • Step 4: Incident Integration (Manual). In your Slack incident channel (#incident-fire), pin a link to the “Emergency Contacts” document so it is available immediately during a crisis.
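Step 3’s calendar reminder can be backed by a small check that flags entries whose verification date has lapsed. A sketch, assuming each row records an ISO-format last-verified date (the entries are illustrative):

```python
from datetime import date

REVIEW_INTERVAL_DAYS = 183  # roughly the six-month cycle suggested above

# Hypothetical register rows: (authority, last_verified ISO date)
register = [
    ("ICO", "2025-01-10"),
    ("AI Office", "2024-05-02"),
]

def stale_entries(register, today: date):
    """Return authorities whose last verification is older than the review interval."""
    return [
        name for name, verified in register
        if (today - date.fromisoformat(verified)).days > REVIEW_INTERVAL_DAYS
    ]

print(stale_entries(register, date(2025, 3, 1)))  # ['AI Office']
```

Run this in CI or as a scheduled job and a broken review cycle surfaces before the auditor finds it.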

For an AI company, ISO 27001 Annex A 5.5 is your safety net. It ensures that when the complex world of AI regulation intersects with a security incident, you aren’t caught off guard. By mapping out your relevant authorities now, you protect your company’s reputation and ensure you can navigate a crisis with speed and precision.

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
