How to Audit ISO 27001 Control 8.26: Application Security Requirements


Auditing ISO 27001 Annex A 8.26 Application Security Requirements means technically verifying the security specifications built into the software lifecycle. The primary implementation requirement is the formalisation of security design and testing protocols; the business benefit is resilient software that mitigates injection vulnerabilities and data leakage risks.

ISO 27001 Annex A 8.26 Application Security Requirements Audit Checklist

This technical verification tool is designed for lead auditors to establish the security integrity of bespoke and off-the-shelf software applications. Use this checklist to validate compliance with ISO 27001 Annex A 8.26.

1. Information Security Requirements Formalisation Verified

Verification Criteria: Specific security requirements for applications are documented and approved during the initial specification and design phases.

Required Evidence: Software Requirements Specifications (SRS) or Design Documents containing explicit functional and non-functional security requirements (a sketch of one way such requirements can be recorded follows this item).

Pass/Fail Test: If an application was developed or procured without a formalised list of security requirements (e.g., encryption, session handling), mark as Non-Compliant.
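Where requirements are tracked in a register rather than free-form prose, a machine-readable record makes sampling the audit trail easier. The sketch below is purely illustrative; the `SecurityRequirement` structure and its field names are assumptions, not a prescribed SRS format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SecurityRequirement:
    """One entry in an illustrative security requirements register (hypothetical format)."""
    req_id: str            # e.g. "SEC-REQ-004"
    category: str          # e.g. "encryption", "session handling", "access control"
    statement: str         # the testable requirement itself
    approved_by: str       # named approver from the design review
    approved_on: date      # approval date recorded during the design phase
    verification: str = "" # how the requirement will be tested (links to item 10)

# An auditor sampling this register would expect every requirement to carry
# an approver and a verification method, mirroring the Pass/Fail test above.
example = SecurityRequirement(
    req_id="SEC-REQ-004",
    category="session handling",
    statement="Session tokens must expire after 15 minutes of inactivity.",
    approved_by="Application Security Architect",
    approved_on=date(2024, 3, 1),
    verification="DAST scan plus manual session timeout test",
)
```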

2. Authentication and Access Control Requirements Confirmed

Verification Criteria: Requirements for robust authentication (including MFA) and granular role-based access control (RBAC) are defined for all application user tiers.

Required Evidence: Application architecture diagrams or access control matrices specifying required login factors and permission levels.

Pass/Fail Test: If the application design allows single-factor authentication for administrative functions or lacks granular permission logic, mark as Non-Compliant. A minimal RBAC sketch follows.
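To make "granular permission logic" concrete, the sketch below shows a minimal role-to-permission mapping with deny-by-default enforcement. The role names and permission strings are illustrative assumptions, not taken from any specific product.

```python
# Minimal RBAC sketch: map roles to explicit permissions and deny by default.
ROLE_PERMISSIONS = {
    "viewer": {"record:read"},
    "editor": {"record:read", "record:write"},
    "admin": {"record:read", "record:write", "user:manage"},
}

def is_authorised(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Administrative functions should additionally require MFA at login; that
# check belongs in the authentication layer, not in this permission lookup.
assert is_authorised("editor", "record:write")
assert not is_authorised("viewer", "user:manage")
```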

3. Data Input Sanitisation and Validation Validated

Verification Criteria: Technical requirements for the validation and sanitisation of all user-supplied data are documented to prevent common injection attacks.

Required Evidence: Secure Coding Standard or API Documentation specifying input validation parameters and regex filters.

Pass/Fail Test: If there is no documented requirement for input sanitisation (e.g., against SQLi or XSS) in the application design, mark as Non-Compliant. A short validation sketch follows.
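The sketch below illustrates the two usual requirements: validate input against an allow-list pattern, and pass data to the database as a bound parameter rather than by string concatenation. It uses Python's built-in sqlite3 module purely for illustration.

```python
import re
import sqlite3

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")  # allow-list, not deny-list

def find_user(conn: sqlite3.Connection, username: str):
    # 1. Validate: reject anything outside the documented allow-list.
    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("invalid username format")
    # 2. Parameterise: the driver binds the value, so it is never parsed as SQL.
    cursor = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cursor.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))
print(find_user(conn, "alice"))  # (1, 'alice')
# find_user(conn, "alice'; DROP TABLE users;--")  # raises ValueError before reaching SQL
```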

4. Cryptographic Protection Requirements Verified

Verification Criteria: Requirements for protecting sensitive data at rest and in transit within the application are defined, specifying algorithms and key management rules.

Required Evidence: Data Classification Policy and Application Security Design documents citing TLS versions and AES-256 (or equivalent) standards.

Pass/Fail Test: If the application design does not specify the encryption protocols for sensitive data fields or database volumes, mark as Non-Compliant. An illustrative encryption sketch follows.
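One way an "AES-256 or equivalent" requirement can be expressed in code is shown below, using AES-256-GCM from the widely used `cryptography` package (an assumption about the toolchain). Key storage and rotation are out of scope here and would be governed by the key management rules the control calls for.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# 256-bit key; in practice this comes from a key management system, never from code.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_field(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    nonce = os.urandom(12)                       # unique per encryption, never reused
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_field(blob: bytes, associated_data: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

token = encrypt_field(b"4111-1111-1111-1111")
assert decrypt_field(token) == b"4111-1111-1111-1111"
```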

5. Application Logging and Audit Trail Requirements Confirmed

Verification Criteria: Requirements for generating, protecting, and reviewing application-level audit logs are explicitly defined.

Required Evidence: Logging Standard or Technical Specification identifying mandatory event types to be logged (e.g., login failure, data modification).

Pass/Fail Test: If the application is incapable of logging user activity or if logging requirements were never specified, mark as Non-Compliant. A minimal logging sketch follows.
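A logging requirement is easiest to audit when the mandatory event types are enumerated and each entry carries who, what, when, and outcome. The sketch below uses the standard library `logging` module; the event names are illustrative, not a prescribed taxonomy.

```python
import logging

# Application-level security log; in production this handler would forward to a SIEM.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s", level=logging.INFO
)
security_log = logging.getLogger("app.security")

def log_security_event(event_type: str, user: str, outcome: str, detail: str = "") -> None:
    """Record a mandatory event type (e.g. LOGIN_FAILURE, DATA_MODIFICATION)."""
    security_log.info("event=%s user=%s outcome=%s detail=%s",
                      event_type, user, outcome, detail)

log_security_event("LOGIN_FAILURE", user="alice", outcome="denied", detail="bad password")
log_security_event("DATA_MODIFICATION", user="bob", outcome="success", detail="record 42 updated")
```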

6. Error Handling and Information Leakage Prevention Validated

Verification Criteria: Requirements are defined to ensure that application error messages do not reveal sensitive system or technical information to end-users.

Required Evidence: UI/UX Design specs or Error Handling Procedures specifying generic user-facing messages vs. detailed internal logs.

Pass/Fail Test: If an application in production reveals stack traces, database versions, or server paths in public error messages, mark as Non-Compliant. An error-handling sketch follows.
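The usual pattern is to log the full technical detail internally and return only a generic message plus a correlation identifier to the user. The sketch below is framework-agnostic; the `handle_request` wrapper and the incident-ID format are illustrative assumptions.

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
internal_log = logging.getLogger("app.errors")

def handle_request(operation):
    """Run an operation; hide technical detail from the caller, keep it in the internal log."""
    try:
        return {"status": "ok", "data": operation()}
    except Exception:
        incident_id = uuid.uuid4().hex[:8]
        # Full stack trace, paths and versions stay in the internal log only.
        internal_log.exception("unhandled error, incident_id=%s", incident_id)
        # The end-user sees nothing about the stack, database, or server layout.
        return {"status": "error",
                "message": f"Something went wrong. Quote incident {incident_id} to support."}

print(handle_request(lambda: 1 / 0))  # generic message, no traceback in the response
```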

7. Secure Session Management Implementation Verified

Verification Criteria: Technical requirements for session timeouts, concurrent login limits, and secure token handling (e.g., HttpOnly/Secure flags) are documented.

Required Evidence: Web Application Firewall (WAF) configuration logs or session management policy documentation.

Pass/Fail Test: If session tokens are found in URL parameters or lack a mandatory expiration timeout in the application configuration, mark as Non-Compliant. A session-cookie sketch follows.
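The attributes named in the criteria can be checked directly in the Set-Cookie header the application emits. The sketch below builds such a header with the standard library; the 15-minute timeout is an illustrative value, not a mandated one.

```python
from http import cookies
import secrets

def build_session_cookie(ttl_seconds: int = 900) -> str:
    """Return a Set-Cookie header value with the flags the design should require."""
    jar = cookies.SimpleCookie()
    jar["session"] = secrets.token_urlsafe(32)   # random token, never placed in a URL
    jar["session"]["httponly"] = True            # not readable by page JavaScript
    jar["session"]["secure"] = True              # only sent over TLS
    jar["session"]["samesite"] = "Strict"        # limits cross-site request forgery
    jar["session"]["max-age"] = ttl_seconds      # mandatory expiration timeout
    jar["session"]["path"] = "/"
    return jar["session"].OutputString()

print(build_session_cookie())
# e.g. session=...; HttpOnly; Max-Age=900; Path=/; SameSite=Strict; Secure
```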

8. Supply Chain and Third-Party Component Security Confirmed

Verification Criteria: Requirements exist to verify the security of integrated third-party components, APIs, and libraries used by the application.

Required Evidence: Software Bill of Materials (SBOM) or Third-Party Security Assessment reports for integrated services.

Pass/Fail Test: If the organisation cannot produce a list of third-party libraries used in the application or has no record of assessing their security, mark as Non-Compliant. An SBOM sampling sketch follows.
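If the organisation produces an SBOM, sampling it is straightforward. The sketch below assumes a CycloneDX-style JSON export (the filename is an assumption); the point is simply that the component inventory the Pass/Fail test asks for can be enumerated and cross-checked.

```python
import json

# Assumes a CycloneDX-format SBOM exported by the build pipeline (illustrative filename).
with open("sbom.cyclonedx.json", encoding="utf-8") as handle:
    sbom = json.load(handle)

components = sbom.get("components", [])
print(f"{len(components)} third-party components declared")
for component in components:
    # name, version and purl are standard CycloneDX component fields.
    print(component.get("name"), component.get("version"), component.get("purl"))

# Audit expectation: this inventory exists, is current, and each entry maps to a
# recorded security assessment or vulnerability-scan result.
```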

9. Availability and Resilience Requirements Validated

Verification Criteria: Technical requirements for high availability, backup frequency, and disaster recovery are defined for the application service.

Required Evidence: Business Continuity Plan (BCP) or Service Level Agreement (SLA) specifying the target Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for the application.

Pass/Fail Test: If an application identified as ‘Critical’ has no documented requirement for redundancy or failover testing, mark as Non-Compliant. A simple RPO consistency check follows.
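One quick consistency test an auditor can run: compare the stated RPO against the actual backup schedule, since a backup interval longer than the RPO cannot meet the requirement. The sketch below is simple arithmetic over illustrative figures.

```python
from datetime import timedelta

# Illustrative targets taken from an SLA / BCP for a 'Critical' application.
rpo = timedelta(hours=1)              # maximum tolerable data loss
rto = timedelta(hours=4)              # maximum tolerable downtime
backup_interval = timedelta(hours=6)  # what the backup job actually runs at
last_failover_test_days_ago = 400     # from the DR test log

if backup_interval > rpo:
    print(f"Non-Compliant: backups every {backup_interval} cannot meet an RPO of {rpo}")
if last_failover_test_days_ago > 365:
    print("Non-Compliant: failover has not been tested within the last 12 months")
```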

10. Security Testing and Certification Records Present

Verification Criteria: The application has undergone formal security testing (e.g., Pentest or DAST) to verify that the identified requirements have been met.

Required Evidence: Signed Vulnerability Assessment reports or Penetration Testing Remediation logs showing all critical findings closed.

Pass/Fail Test: If a production application has not undergone a security test in the last 12 months or since its last major release, mark as Non-Compliant. A trivial recency check follows.
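The 12-month recency rule in the Pass/Fail test is easy to evidence from report metadata. The sketch below is a date check over illustrative values; the report and release dates are assumptions.

```python
from datetime import date, timedelta

last_pentest = date(2024, 2, 10)        # date on the signed pentest report
last_major_release = date(2024, 9, 1)   # date of the latest major release
today = date(2025, 1, 15)

stale = (today - last_pentest) > timedelta(days=365)
untested_release = last_major_release > last_pentest

if stale or untested_release:
    print("Non-Compliant: no security test in the last 12 months or since the last major release")
else:
    print("Compliant: current security test evidence on file")
```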

| Control Requirement | The ‘Checkbox Compliance’ Trap | The Reality Check |
| --- | --- | --- |
| Requirement Analysis | Tool checks if a “Requirement Document” exists. | Verify Granularity. Check whether security is a dedicated chapter or just a “secure the system” bullet point. |
| Secure Design | Platform identifies that “Devs are trained.” | Verify Architectural Review. Demand the Threat Model or Data Flow Diagram (DFD) for the latest release. |
| Dependency Management | Tool checks if “GitHub Dependabot” is ‘Enabled’. | Verify Remediation. A ‘green’ status on a GRC tool often ignores the 50 “high” alerts the devs never patched (see the sketch after this table). |
| Logging Efficacy | Platform confirms “Application Logs are sent to SIEM.” | Verify Attribute Quality. Do the logs identify the human user or just the generic “system” account? |
| Testing Cycle | Tool records “Pentest Done” based on a quote/invoice. | Verify Closure. A pentest report with 20 open findings is a failure, regardless of what the GRC tool says. |
| Third-Party APIs | Tool assumes SaaS APIs are inherently secure. | Check Permission Scoping. If an API key has ‘Owner’ rights when it only needs ‘Read’, the control is a failure. |
| Resilience | GRC platform identifies “Backups are active.” | Verify Restore Logic. Can the application be rebuilt from scratch using only the requirements and backups? |
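For the Dependency Management row, a ‘green’ status can be cross-checked against the actual open alerts. The sketch below queries the GitHub REST API Dependabot alerts endpoint with the `requests` package; the repository name and token handling are assumptions, and the token needs read access to Dependabot alerts.

```python
import os
import requests  # pip install requests

OWNER, REPO = "example-org", "example-app"   # illustrative repository
token = os.environ["GITHUB_TOKEN"]           # token needs Dependabot alerts read access

response = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/dependabot/alerts",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.github+json"},
    params={"state": "open", "severity": "high,critical", "per_page": 100},
    timeout=30,
)
response.raise_for_status()
open_alerts = response.json()
print(f"{len(open_alerts)} open high/critical Dependabot alerts")
# 'Dependabot enabled' plus a non-zero count here is exactly the gap the reality check describes.
```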

About the author

Stuart Barker

ISO 27001 Ninja

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
