How to Audit ISO 27001 Control 8.16: Monitoring Activities

ISO 27001 Annex A 8.16 audit checklist

Auditing ISO 27001 Annex A 8.16 Monitoring Activities is the technical verification that detection systems identify unauthorised activities and anomalies. The primary implementation requirement is continuous behavioural analysis across the infrastructure; the business benefit is rapid threat identification and minimised impact from security breaches.

ISO 27001 Annex A 8.16 Monitoring Activities Audit Checklist

This technical verification framework is designed for lead auditors to establish the efficacy of real-time detection and behavioural analysis within the ISMS. Use this checklist to validate compliance with ISO 27001 Annex A 8.16.

1. Monitoring Scope and Objective Formalisation Verified

Verification Criteria: A documented monitoring strategy exists that defines what systems are monitored, the types of anomalies sought, and the required response times.

Required Evidence: Approved Security Monitoring Policy or SOC Operating Model document.

Pass/Fail Test: If the organisation cannot produce a formal document defining the baseline for “normal” behaviour vs “anomalous” behaviour, mark as Non-Compliant.

2. Continuous Behavioural Baseline Monitoring Confirmed

Verification Criteria: Technical systems actively establish and monitor user and entity behaviour analytics (UEBA) baselines to detect deviations from standard patterns.

Required Evidence: Dashboard screenshots from EDR, SIEM, or NDR tools showing active behavioural analytics profiles.

Pass/Fail Test: If monitoring is strictly signature-based and lacks the technical capability to detect pattern deviations (e.g., unusual data egress volume), mark as Non-Compliant.
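To make the "pattern deviation" test concrete, an auditor can ask how the tooling decides that an egress volume is anomalous. A minimal sketch of the idea, using a simple z-score against a historical baseline and entirely hypothetical sample data (real UEBA products use far richer models):

```python
from statistics import mean, stdev

def egress_anomalies(daily_mb, threshold=3.0):
    """Flag days whose outbound data volume deviates from the baseline.

    daily_mb: list of daily egress volumes in MB (hypothetical sample data).
    Returns indices of days more than `threshold` standard deviations
    above the historical mean -- a simple z-score baseline.
    """
    mu, sigma = mean(daily_mb), stdev(daily_mb)
    return [i for i, v in enumerate(daily_mb)
            if sigma > 0 and (v - mu) / sigma > threshold]

# 29 "normal" days around 100 MB, then one 5 GB exfiltration-style spike
history = [100, 102, 98, 101, 99] * 5 + [97, 103, 100, 99, 5000]
print(egress_anomalies(history))  # only the final spike day is flagged
```

A purely signature-based stack has no equivalent of this calculation, which is exactly what the pass/fail test probes.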

3. Security Tooling Health and Uptime Validated

Verification Criteria: Monitoring agents and security tools (EDR, SIEM, IDS/IPS) are checked for health, ensuring they are active on 100% of the defined scope.

Required Evidence: Agent health reports or “Heartbeat” logs showing zero unmanaged or non-reporting critical assets.

Pass/Fail Test: If more than 5% of critical production servers are not currently reporting to the central monitoring system, mark as Non-Compliant.
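The 5% coverage test is straightforward to reproduce from evidence: diff the asset inventory against the heartbeat list and compute the gap. A minimal sketch with hypothetical asset names:

```python
def coverage_gap(critical_assets, reporting_assets):
    """Return critical assets with no recent heartbeat, and the gap as a fraction."""
    missing = set(critical_assets) - set(reporting_assets)
    return sorted(missing), len(missing) / len(critical_assets)

# Hypothetical asset inventory vs. SIEM heartbeat list
assets = [f"srv-{i:02d}" for i in range(1, 21)]   # 20 critical servers
reporting = assets[:-2]                            # two agents silent
missing, gap = coverage_gap(assets, reporting)
print(missing, f"{gap:.0%}")  # a 10% gap exceeds the 5% threshold: Non-Compliant
```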

4. Real-Time Alerting and Triage Workflow Verified

Verification Criteria: Monitoring activities trigger automated alerts that are immediately triaged by a designated responder or Security Operations Centre (SOC).

Required Evidence: Incident tickets or SOC logs showing the time between “Alert Triggered” and “Initial Triage” (MTTA).

Pass/Fail Test: If critical security alerts are found sitting unacknowledged in a queue for more than the policy-defined window (e.g., 1 hour), mark as Non-Compliant.
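MTTA and the policy-window check can both be recomputed directly from ticket timestamps. A minimal sketch, assuming hypothetical incident IDs and a 1-hour window:

```python
from datetime import datetime, timedelta

def triage_breaches(alerts, window=timedelta(hours=1)):
    """Return alert IDs whose Alert->Triage gap exceeds the policy window,
    plus the mean time-to-acknowledge (MTTA) across all alerts."""
    gaps = {aid: ack - raised for aid, raised, ack in alerts}
    mtta = sum(gaps.values(), timedelta()) / len(gaps)
    return [aid for aid, g in gaps.items() if g > window], mtta

# Hypothetical SOC ticket data: (id, alert_triggered, initial_triage)
t = datetime(2024, 3, 1, 9, 0)
alerts = [
    ("INC-101", t, t + timedelta(minutes=12)),
    ("INC-102", t, t + timedelta(minutes=30)),
    ("INC-103", t, t + timedelta(hours=3)),   # sat unacknowledged too long
]
breaches, mtta = triage_breaches(alerts)
print(breaches, mtta)  # one breach; MTTA of 74 minutes
```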

5. External Inbound/Outbound Traffic Monitoring Confirmed

Verification Criteria: Continuous monitoring of network traffic at the perimeter is active to detect unauthorised communication with known malicious IPs or command-and-control (C2) servers.

Required Evidence: Next-Gen Firewall (NGFW) or Intrusion Detection System (IDS) logs showing blocked or flagged external connections.

Pass/Fail Test: If the organisation does not monitor outbound HTTPS traffic for data exfiltration or C2 patterns, mark as Non-Compliant.
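At its core, the check is a join between the connection log and a threat-intelligence list. A minimal sketch using documentation-reserved TEST-NET addresses as a hypothetical blocklist:

```python
def flag_c2_traffic(conn_log, bad_ips):
    """Return outbound connections whose destination is on a threat-intel list."""
    return [c for c in conn_log if c["dst"] in bad_ips]

# Hypothetical blocklist and firewall connection log
blocklist = {"203.0.113.7", "198.51.100.9"}   # TEST-NET documentation IPs
log = [
    {"src": "10.0.0.5", "dst": "93.184.216.34", "port": 443},
    {"src": "10.0.0.8", "dst": "203.0.113.7", "port": 443},  # possible C2
]
print(flag_c2_traffic(log, blocklist))
```

In practice NGFW/IDS platforms do this with continuously updated feeds; the audit question is whether outbound HTTPS is in scope at all.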

6. Privileged Account Activity Monitoring Validated

Verification Criteria: Enhanced monitoring is applied specifically to accounts with elevated privileges, detecting high-risk commands or out-of-hours access.

Required Evidence: PAM (Privileged Access Management) logs or SIEM filters specifically targeting Domain Admin/Root account actions.

Pass/Fail Test: If administrative actions can be taken in the production environment without triggering a specific monitoring event, mark as Non-Compliant.
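The out-of-hours element of this criterion reduces to a time-window filter over the PAM audit trail. A minimal sketch, assuming hypothetical events and a 08:00-18:00 business window:

```python
from datetime import datetime

def out_of_hours_admin(events, start=8, end=18):
    """Flag privileged actions performed outside business hours (start-end)."""
    return [e for e in events
            if e["privileged"] and not (start <= e["time"].hour < end)]

# Hypothetical PAM audit trail
events = [
    {"user": "root", "time": datetime(2024, 3, 1, 14, 5), "privileged": True},
    {"user": "root", "time": datetime(2024, 3, 2, 2, 30), "privileged": True},
    {"user": "alice", "time": datetime(2024, 3, 2, 2, 45), "privileged": False},
]
print(out_of_hours_admin(events))  # only the 02:30 root action is flagged
```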

7. Resource Performance and Availability Integration Verified

Verification Criteria: Monitoring includes system performance and availability metrics (CPU, RAM, Disk) to identify potential Denial of Service (DoS) or ransomware encryption events.

Required Evidence: Performance monitoring logs (e.g., CloudWatch, Zabbix, Datadog) cross-referenced with security incident triggers.

Pass/Fail Test: If a system crash due to resource exhaustion occurs without an automated alert being generated before the failure, mark as Non-Compliant.
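The pass/fail test hinges on alerts firing before exhaustion, which means thresholds must sit below the failure point. A minimal threshold sketch with hypothetical hosts and limits (real platforms such as those listed above would do this natively):

```python
def resource_alerts(samples, cpu_max=90, disk_max=85):
    """Flag metric samples breaching CPU or disk thresholds so a ticket
    fires ahead of the crash, not after it."""
    alerts = []
    for s in samples:
        if s["cpu"] >= cpu_max:
            alerts.append((s["host"], "cpu", s["cpu"]))
        if s["disk"] >= disk_max:
            alerts.append((s["host"], "disk", s["disk"]))
    return alerts

# Hypothetical metric samples (percent utilisation)
samples = [
    {"host": "db-01", "cpu": 45, "disk": 60},
    {"host": "app-02", "cpu": 97, "disk": 88},  # both thresholds breached
]
print(resource_alerts(samples))
```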

8. False Positive Review and Tuning Records Identified

Verification Criteria: Regular reviews of monitoring alerts are conducted to tune out “noise” and improve the detection of genuine security threats.

Required Evidence: Weekly/Monthly SOC tuning logs or evidence of correlation rule updates in the SIEM.

Pass/Fail Test: If the monitoring system generates an unmanageable volume of false positives that results in “Alert Fatigue” and ignored events, mark as Non-Compliant.
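Tuning evidence can be spot-checked by computing per-rule false-positive rates from closed tickets. A minimal sketch with hypothetical rule names and analyst verdicts:

```python
from collections import Counter

def noisy_rules(closed_alerts, fp_threshold=0.9):
    """Identify correlation rules whose alerts are overwhelmingly false
    positives -- candidates for tuning before they cause alert fatigue."""
    totals, fps = Counter(), Counter()
    for rule, verdict in closed_alerts:
        totals[rule] += 1
        if verdict == "false_positive":
            fps[rule] += 1
    return [r for r in totals if fps[r] / totals[r] >= fp_threshold]

# Hypothetical month of closed SOC alerts: (rule_id, analyst_verdict)
closed = ([("R-impossible-travel", "false_positive")] * 19
          + [("R-impossible-travel", "true_positive")]
          + [("R-c2-beacon", "true_positive")] * 4)
print(noisy_rules(closed))  # the rule with a 95% FP rate is flagged
```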

9. Integrity of Monitoring Data Confirmed

Verification Criteria: The data generated by monitoring activities is protected against unauthorised modification or deletion to ensure it remains reliable for forensics.

Required Evidence: Restricted Access Control Lists (ACLs) for monitoring repositories and evidence of write-once storage or log hashing.

Pass/Fail Test: If a local system administrator can delete or modify the monitoring logs that record their own activities, mark as Non-Compliant.
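Log hashing, one of the evidence types named above, can be illustrated with a simple hash chain: each entry's digest covers the previous digest, so modifying any record breaks verification of everything after it. A minimal sketch with hypothetical log lines (production systems would use write-once storage or a signed, centralised pipeline):

```python
import hashlib

def chain_logs(lines, seed="genesis"):
    """Build a hash chain over log lines; each digest covers its predecessor."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    chain = []
    for line in lines:
        digest = hashlib.sha256((digest + line).encode()).hexdigest()
        chain.append((line, digest))
    return chain

def verify(chain, seed="genesis"):
    """Recompute the chain; any edited or deleted entry causes a mismatch."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    for line, stored in chain:
        digest = hashlib.sha256((digest + line).encode()).hexdigest()
        if digest != stored:
            return False
    return True

logs = ["09:00 root login", "09:05 config change", "09:10 log export"]
chain = chain_logs(logs)
print(verify(chain))                                 # True
chain[1] = ("09:05 nothing happened", chain[1][1])   # tamper with an entry
print(verify(chain))                                 # False
```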

10. Monitoring Effectiveness Reporting to Management Verified

Verification Criteria: Summaries of monitoring activities, including detected incidents and trends, are reviewed by senior management to ensure the strategy remains effective.

Required Evidence: Management Review Meeting (MRM) minutes or monthly security dashboard reports presented to leadership.

Pass/Fail Test: If there is no evidence that management has reviewed the detection efficacy or incident trends in the last 6 months, mark as Non-Compliant.

ISO 27001 Annex A 8.16 SaaS / GRC Platform Failure Checklist
Anomaly Detection
The ‘Checkbox Compliance’ Trap: Tool records “Monitoring Active” because a SIEM is connected.
The Reality Check: Verify thresholds. A GRC tool cannot tell if the alert threshold is set so high that it misses 90% of genuine attacks.

Behavioural Baselining
The ‘Checkbox Compliance’ Trap: Platform identifies “UEBA: Enabled” in the security suite.
The Reality Check: Test the baseline. Demand proof of a detected deviation. If “Enabled” but never triggers, the baseline is likely irrelevant.

Coverage Validation
The ‘Checkbox Compliance’ Trap: Tool checks for “Agent Installed” on the asset list.
The Reality Check: Verify blind spots. Check if the monitoring covers unmanaged devices, IoT, or “Shadow IT” that isn’t in the GRC inventory.

Alert Triage
The ‘Checkbox Compliance’ Trap: GRC tool identifies “100 Alerts Resolved” as a success.
The Reality Check: Review the remediation quality. If alerts were closed with “No Action” just to meet a GRC deadline, the control has failed.

Forensic Integrity
The ‘Checkbox Compliance’ Trap: Platform assumes monitoring data is secure in the Cloud.
The Reality Check: Check deletion rights. If the global admin can “Clear Logs” without a second approver, the forensic chain is broken.

Network Monitoring
The ‘Checkbox Compliance’ Trap: Tool records that a “Firewall is present”.
The Reality Check: Verify east-west traffic. GRC tools often ignore internal traffic monitoring; check for detection of lateral movement.

Management Review
The ‘Checkbox Compliance’ Trap: SaaS tool generates an automated PDF report.
The Reality Check: Verify engagement. A report in a folder is not a review. The auditor must see evidence of management action based on the data.

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigour with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
