If you are building the next generation of Large Language Models (LLMs) or deploying computer vision agents, “talking to the police” is probably low on your priority list. You are worried about inference costs, model bias, and finding enough GPUs. However, if you are pursuing ISO 27001 certification, ISO 27001 Annex A 5.5: Contact with Authorities is a control you cannot ignore.
For most traditional businesses, this control is a boring list of phone numbers. For an AI company, in the era of the EU AI Act and intensifying data privacy scrutiny, this control is a strategic minefield. Here is how to implement it effectively without slowing down your innovation.
What is Annex A 5.5? (It’s Not Just 911)
The requirement of Annex A 5.5 is deceptively simple: The organization must establish and maintain contact with relevant authorities.
The keyword here is relevant. If you are an AI company processing millions of user interactions, “relevant” doesn’t just mean the local fire department. It means the people who can shut you down or fine you if your model leaks training data or violates a safety statute.
The goal is preparedness. When a crisis hits, whether it’s a ransomware attack locking up your training data or a regulatory inquiry into your data scraping practices, you shouldn’t be scrambling to find out who to call. You need a pre-approved communication channel ready to go.
The “Authorities” Landscape for AI Companies
This is where AI companies differ from a standard bakery or consultancy. Your list of authorities is going to be longer and more complex. When implementing this, you need to categorize “authorities” into three buckets:
1. Data Protection and AI Regulators
This is your biggest risk area. If your model accidentally reveals PII (Personally Identifiable Information) from its training set, you have a data breach. You need the direct contact details for:
- The Information Commissioner’s Office (ICO) in the UK, or your local Data Protection Authority (DPA).
- AI Safety Institutes: As new regulations like the EU AI Act come online, specific bodies are being formed to oversee AI safety. You need to know who they are.
2. Law Enforcement and Cyber Units
If someone steals your proprietary model weights, that is intellectual property theft. If you are hit by a state-sponsored attack, local police can’t help you. You need contacts for:
- Regional Cyber Crime Units.
- Federal agencies handling IP theft or critical infrastructure attacks.
3. Operational Authorities
Who keeps your GPUs running? While technically “utilities,” maintaining contact with your cloud provider’s emergency response team (AWS, Azure, GCP) or your data center’s security desk is often grouped here for practical incident response.
How to Implement This Without the Headache
Implementation doesn’t mean having a red phone on your desk. It means having a document. Here is the practical way to satisfy the auditor:
Step 1: Build the Register
Create a simple table in your Information Security Management System (ISMS). It needs to list the Authority Name, Contact Details (phone/email/portal), and the Reason for Contact.
Pro Tip: Don’t just list “The Police.” List “Cyber Fraud Division – Non-Emergency Line.” Specificity shows competence.
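In practice, the register can live in a spreadsheet, a wiki page, or as structured data under version control. Here is a minimal Python sketch of the version-controlled approach; every authority name, URL, and phone number in it is an illustrative placeholder, not a real contact.

```python
from dataclasses import dataclass

@dataclass
class Authority:
    name: str           # Be specific: the exact unit, not just "The Police"
    contact: str        # Phone number, email, or reporting portal URL
    reason: str         # The trigger for contacting this authority
    last_verified: str  # Date the entry was last checked (see Step 3)

# Hypothetical entries; replace with the bodies relevant to your jurisdiction.
REGISTER = [
    Authority(
        name="ICO - Personal Data Breach Reporting",
        contact="https://example.org/report-a-breach",  # placeholder URL
        reason="Personal data breach, e.g. PII leaked from a training set",
        last_verified="2025-01-15",
    ),
    Authority(
        name="Regional Cyber Crime Unit - Non-Emergency Line",
        contact="+00 0000 000000",  # placeholder number
        reason="Theft of model weights, ransomware, state-sponsored attacks",
        last_verified="2025-01-15",
    ),
]
```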
Step 2: Define the “Trigger”
This is crucial for AI startups. You don’t want a junior developer calling the Data Protection Regulator because they found a minor bug. You need a clear protocol: “If X happens, the CISO contacts Y.”
This ensures that communication with authorities is managed, professional, and legally vetted.
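One lightweight way to make the trigger protocol unambiguous is to encode it as a lookup table next to the register. The sketch below is an assumption-laden example: the incident categories, role names, and deadlines are made up for illustration, except the 72-hour breach notification window, which comes from GDPR Article 33. Adapt the rest to your own incident response plan.

```python
# Hypothetical incident categories mapped to (decision maker, register entry,
# reporting deadline). Only the 72-hour deadline is a real legal requirement
# (GDPR Article 33); everything else here is an assumption to adapt.
ESCALATION = {
    "personal_data_breach":  ("CISO", "ICO - Personal Data Breach Reporting", "72 hours"),
    "model_weight_theft":    ("CISO", "Regional Cyber Crime Unit - Non-Emergency Line", "24 hours"),
    "infrastructure_outage": ("Head of Infrastructure", "Cloud provider emergency support", "ASAP"),
}

def escalation_path(incident_category: str) -> tuple[str, str, str]:
    """Return who decides, which authority to contact, and by when.

    Raising on unknown categories is deliberate: it forces the incident
    handler to classify the event before anyone phones a regulator.
    """
    if incident_category not in ESCALATION:
        raise ValueError(f"Unclassified incident {incident_category!r}: escalate internally first")
    return ESCALATION[incident_category]
```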
Step 3: Keep it Updated
Regulators change their reporting portals constantly. A broken link in your “Contact with Authorities” list is an easy non-conformity for an auditor to find. Schedule a review every six months to click the links and verify the numbers.
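If the register is kept as structured data (as in the Step 1 sketch above), the link-checking half of that review can be automated with the standard library. This is a rough sketch that reuses the hypothetical REGISTER from Step 1: it only verifies that portal URLs still respond, and some servers reject HEAD requests, so treat a flagged entry as “needs a human look” rather than proof the link is dead. Phone numbers still need a person to dial them.

```python
import urllib.error
import urllib.request

def portal_is_alive(url: str, timeout: int = 10) -> bool:
    """True if the reporting portal responds to a HEAD request without an error."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Flag register entries whose portal URL no longer responds.
stale = [entry.name for entry in REGISTER
         if entry.contact.startswith("http") and not portal_is_alive(entry.contact)]
if stale:
    print("Register entries needing an update:", stale)
```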
If you don’t want to build this register from scratch, Hightable.io provides excellent ISO 27001 toolkits that include pre-formatted templates for Contact with Authorities. Using a proven template can save you time and ensure you aren’t missing standard regulatory bodies required for compliance.
Common Mistakes AI Companies Make
Mistake 1: Ignoring International Regulators.
If you are an AI company in the US but you scrape data from Europe, you are subject to GDPR. Do you have the contact info for the relevant EU authorities? If not, you are non-compliant.
Mistake 2: Confusing “Authorities” with “Special Interest Groups”.
Annex A 5.5 is about people with legal power (Police, Regulators). Annex A 5.6 is about special interest groups (AI Ethics boards, industry forums). Keep them separate in your documentation.
Conclusion
For an AI company, ISO 27001 Annex A 5.5 is your safety net. It ensures that when the complex world of AI regulation intersects with a security incident, you aren’t caught off guard. By mapping out your relevant authorities now, you protect your company’s reputation and ensure you can navigate a crisis with speed and precision.
Don’t let bureaucracy slow down your training runs. Get your contacts sorted, use a solid template like those from Hightable.io, and get back to building the future.