ISO 27001 Annex A 5.7 Audit Checklist


Auditing ISO 27001 Annex A 5.7 (Threat Intelligence) validates the systematic collection and analysis of data about potential security attacks. The audit confirms the primary implementation requirement: contextualising threat data so that it informs risk decisions and defensive actions. The business benefit is a proactive defence posture that reduces incident impact by anticipating adversary tactics rather than merely reacting to them.

This technical verification tool is designed for lead auditors to confirm the active integration of threat data into the organisational risk and response framework. Use this checklist to validate compliance with ISO 27001 Annex A 5.7 (Threat Intelligence) by assessing the collection, analysis, and operational application of threat-related information.

1. Threat Intelligence Process Formalisation Verified

Verification Criteria: A documented process or procedure exists that defines how threat intelligence is collected, processed, and disseminated across the organisation.

Required Evidence: Approved Threat Intelligence Policy or Standard Operating Procedure (SOP) detailing the intelligence lifecycle.

Pass/Fail Test: If the organisation cannot produce a documented methodology for handling threat data, mark as Non-Compliant.

2. Identification of Diverse Intelligence Sources Confirmed

Verification Criteria: The organisation has identified and formalised both internal and external sources of threat data, encompassing tactical, operational, and strategic levels.

Required Evidence: A register of intelligence sources, including subscription records to ISACs, commercial feeds, or government alerts (e.g., NCSC).

Pass/Fail Test: If the organisation relies solely on a single, generic news feed without technical or sector-specific sources, mark as Non-Compliant.

3. Tactical Intelligence Implementation (IoCs) Validated

Verification Criteria: Evidence exists that Indicators of Compromise (IoCs) such as malicious IPs, file hashes, and URLs are actively ingested and used for detection.

Required Evidence: Configuration logs from SIEM, EDR, or Firewall showing the automated or manual ingestion of threat feeds.

Pass/Fail Test: If IoCs are collected but not actively applied to blocking or monitoring tools, mark as Non-Compliant.
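As a worked illustration of the kind of pipeline an auditor might ask to walk through, the sketch below parses an IoC feed and extracts the network indicators a firewall or proxy can actually act on. The CSV layout and field names are assumptions for the example, not any specific vendor's feed schema.

```python
# Minimal sketch: ingest a CSV IoC feed and emit firewall blocklist entries.
# The feed format and field names are illustrative assumptions.
import csv
import io

def parse_ioc_feed(feed_text):
    """Return a list of (indicator_type, value) tuples from a CSV feed."""
    iocs = []
    for row in csv.DictReader(io.StringIO(feed_text)):
        ioc_type = row["type"].strip().lower()
        value = row["value"].strip()
        if ioc_type in {"ip", "domain", "sha256"}:
            iocs.append((ioc_type, value))
    return iocs

def to_blocklist(iocs):
    """Keep only network indicators, which blocking tools can consume directly."""
    return [value for ioc_type, value in iocs if ioc_type in {"ip", "domain"}]

feed = "type,value\nip,203.0.113.10\nsha256,deadbeef\ndomain,bad.example.com\n"
print(to_blocklist(parse_ioc_feed(feed)))
```

The audit point is the last step: collected indicators must end up in a blocking or monitoring control, not just in a spreadsheet.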

4. Operational Intelligence Application (TTPs) Verified

Verification Criteria: The organisation demonstrates the use of operational intelligence to understand the Tactics, Techniques, and Procedures (TTPs) of relevant threat actors.

Required Evidence: Internal threat reports or SOC documentation that maps observed behaviours to frameworks like MITRE ATT&CK.

Pass/Fail Test: If the threat intelligence function cannot describe the TTPs of the top three threat actors relevant to their sector, mark as Non-Compliant.
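The mapping evidence the auditor looks for can be as simple as a lookup from observed behaviours to framework technique IDs. The sketch below uses a tiny illustrative subset of such a table; the ATT&CK technique IDs shown are real identifiers, but the behaviour labels are assumptions for the example.

```python
# Hypothetical sketch: tagging observed behaviours with MITRE ATT&CK technique
# IDs so SOC reports can cite a common framework. The mapping table is a tiny
# illustrative subset.
BEHAVIOUR_TO_ATTACK = {
    "powershell execution": "T1059.001",      # Command and Scripting Interpreter: PowerShell
    "credential dumping": "T1003",            # OS Credential Dumping
    "spearphishing attachment": "T1566.001",  # Phishing: Spearphishing Attachment
}

def map_observations(observations):
    """Tag each observed behaviour with its ATT&CK technique ID, if known."""
    return {obs: BEHAVIOUR_TO_ATTACK.get(obs.lower(), "unmapped") for obs in observations}

print(map_observations(["Credential dumping", "DNS tunnelling"]))
```

Unmapped behaviours are themselves useful audit evidence: they show the analysis function is recording what it observes, not only what the framework already names.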

5. Strategic Intelligence for Executive Decision-Making Confirmed

Verification Criteria: High-level threat landscape trends are communicated to senior management to inform long-term security strategy and investment.

Required Evidence: Board-level security reports or Management Review Meeting (MRM) minutes showing discussions on evolving global threat trends.

Pass/Fail Test: If threat intelligence is treated purely as a technical “IT issue” with no executive-level visibility or strategic reporting, mark as Non-Compliant.

6. Intelligence Analysis and Relevance Vetting Validated

Verification Criteria: Collected data is vetted for relevance to the organisation’s specific technical environment and business context before action is taken.

Required Evidence: Analysis logs or ticket comments within a Threat Intelligence Platform (TIP) or Incident Management tool showing the dismissal of irrelevant alerts.

Pass/Fail Test: If every raw alert from a feed is treated as a high priority without context-based analysis, mark as Non-Compliant.
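Context-based vetting can be demonstrated with a simple rule that compares an alert's affected platforms against the organisation's actual technology stack and records the triage reason. The alert fields and stack list below are assumptions for the example, not a real TIP schema.

```python
# Hypothetical sketch of context-based vetting: raw feed alerts are escalated
# only when they match the organisation's technology stack. Field names and
# the stack list are illustrative assumptions.
TECH_STACK = {"azure", "windows", "postgresql"}

def vet_alert(alert):
    """Return a triage decision with a recorded reason, for the audit trail."""
    affected = {p.lower() for p in alert["affected_platforms"]}
    matches = affected & TECH_STACK
    if matches:
        return {"action": "escalate", "reason": f"matches stack: {sorted(matches)}"}
    return {"action": "dismiss", "reason": "no affected platform in our environment"}

alerts = [
    {"id": "A-1", "affected_platforms": ["AWS", "Linux"]},
    {"id": "A-2", "affected_platforms": ["Azure", "Windows"]},
]
for a in alerts:
    print(a["id"], vet_alert(a)["action"])
```

Note that the dismissal is logged with a reason: that recorded dismissal is exactly the evidence this checklist item asks for.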

7. Integration with Risk Management Framework Verified

Verification Criteria: Insights gained from threat intelligence are used to update the Risk Register and adjust the likelihood of specific security scenarios.

Required Evidence: Updated Risk Assessment records citing specific threat intelligence reports as the justification for changed risk scores.

Pass/Fail Test: If the Risk Register is static and does not reflect changes in the real-world threat landscape, mark as Non-Compliant.
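The traceability being audited here is straightforward: when a likelihood score changes, the record should capture the old score, the new score, and the intelligence reference that justified the change. A minimal sketch, with an illustrative register structure and a hypothetical advisory reference:

```python
# Illustrative sketch: recording a threat-intelligence citation whenever a
# risk score changes, so the Risk Register update is traceable in an audit.
# The register structure and advisory reference are assumptions.
from datetime import date

def update_risk(register, risk_id, new_likelihood, intel_ref):
    """Change a risk's likelihood and append a justified history entry."""
    entry = register[risk_id]
    entry["history"].append({
        "date": date.today().isoformat(),
        "old_likelihood": entry["likelihood"],
        "new_likelihood": new_likelihood,
        "justification": intel_ref,  # e.g. an advisory ID; value here is illustrative
    })
    entry["likelihood"] = new_likelihood
    return entry

register = {"R-12": {"likelihood": 2, "history": []}}
update_risk(register, "R-12", 4, "NCSC advisory (hypothetical reference)")
print(register["R-12"]["likelihood"])
```

A static register fails precisely because its history list stays empty while the threat landscape changes.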

8. Actionable Output and Mitigation Records Present

Verification Criteria: The threat intelligence process produces tangible outputs that lead to defensive improvements or vulnerability patching prioritisation.

Required Evidence: Change requests, patching logs, or firewall rule updates that specifically reference a threat advisory.

Pass/Fail Test: If there is no audit trail showing a security control was modified in response to an intelligence alert, mark as Non-Compliant.

9. Internal Threat Data Contribution Confirmed

Verification Criteria: The organisation uses its own internal incident data and system logs as a source of “Internal Threat Intelligence” to identify patterns.

Required Evidence: Post-Incident Reviews (PIRs) or trend analysis reports derived from internal ticket data over the last 12 months.

Pass/Fail Test: If the organisation only looks at external threats and ignores patterns within its own historical incident data, mark as Non-Compliant.
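Pattern-mining internal incident data need not be sophisticated to count as internal intelligence. The sketch below counts repeat offenders in an incident log, for example the same source IP generating multiple failed-login incidents; the field names and threshold are assumptions for the example.

```python
# Illustrative sketch: mining internal incident records for repeat patterns,
# e.g. one source IP behind multiple failed-login incidents. Field names and
# the threshold are assumptions.
from collections import Counter

incidents = [
    {"type": "failed_login", "src_ip": "198.51.100.7"},
    {"type": "failed_login", "src_ip": "198.51.100.7"},
    {"type": "malware", "src_ip": "203.0.113.9"},
    {"type": "failed_login", "src_ip": "198.51.100.7"},
]

def recurring_sources(records, incident_type, threshold=3):
    """Return source IPs seen at least `threshold` times for a given incident type."""
    counts = Counter(r["src_ip"] for r in records if r["type"] == incident_type)
    return [ip for ip, n in counts.items() if n >= threshold]

print(recurring_sources(incidents, "failed_login"))
```

Formalising output like this into a documented finding is what turns raw ticket data into the "Internal Threat Intelligence" the control expects.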

10. Intelligence Dissemination and Roles Verified

Verification Criteria: Responsibilities for threat intelligence are assigned to specific roles, and information is shared with those who need it in a timely manner.

Required Evidence: Job descriptions or RACI matrix naming TI owners; evidence of internal alerts sent to System Administrators or Developers.

Pass/Fail Test: If critical threat information is received by the security team but not communicated to the technical teams responsible for remediation, mark as Non-Compliant.
ISO 27001 Annex A 5.7 SaaS / GRC Platform Failure Checklist

| Control Requirement | The “Checkbox Compliance” Trap | The Reality Check |
| --- | --- | --- |
| Data Collection | GRC tool identifies an active API connection to a “Free Threat Feed”. | Verify the feed is relevant to the organisation’s tech stack (e.g., Azure vs. AWS) or sector. |
| Information Analysis | SaaS tool records “1,000 threats ingested this month” as a success metric. | Demand evidence of human or advanced AI vetting to filter out noise and false positives. |
| Strategic Intel | Uploading a generic “Global Security PDF” to the GRC evidence folder. | Look for evidence that the Board *actually* discussed these trends in relation to their own budget. |
| Actionable Output | Tool shows “Intelligence” as a separate silo from “Incident Management”. | Trace an alert from the feed to a closed Jira ticket where a specific control was hardened. |
| Internal Intel | GRC platform checks whether an incident log exists. | Verify that incident patterns (e.g., repeated brute force on one IP) are being formalised as internal intel. |
| Source Review | Automated “Last Updated” timestamp on the source list metadata. | Verify that outdated or low-fidelity sources are being pruned from the intelligence ecosystem. |
| Roles | Generic assignment to “IT Admin”. | Verify the individual has the skills to interpret a threat advisory (e.g., understanding CVE scoring). |

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
