How to Audit ISO 27001 Control 8.28: Secure Coding

Auditing ISO 27001 Annex A 8.28 Secure Coding means technically verifying that security principles are embedded within the software development lifecycle. The primary implementation requirement is the integration of SAST/SCA tooling and peer review; the business benefit is the mitigation of OWASP Top 10 vulnerabilities and assurance of software integrity throughout the build.

ISO 27001 Annex A 8.28 Secure Coding Audit Checklist

This checklist is a technical verification tool designed to help lead auditors establish the efficacy of security controls within the software development process. Use it to validate compliance with ISO 27001 Annex A 8.28.

1. Secure Coding Governance and Policy Verified

Verification Criteria: A formalised secure coding policy exists that mandates the use of specific security standards (e.g., OWASP, CERT, or MISRA) across all development projects.

Required Evidence: Approved Secure Coding Policy or Developer Handbook with explicit version control and management authorisation.

Pass/Fail Test: If the organisation cannot produce a documented standard defining mandatory secure coding practices for its technology stack, mark as Non-Compliant.

2. Developer Competency and Security Training Confirmed

Verification Criteria: All personnel involved in writing or reviewing code have received documented training on secure coding techniques and the identification of common vulnerabilities.

Required Evidence: Training completion logs, certificates, or workshop attendance records dated within the last 12 months.

Pass/Fail Test: If active developers have not undergone secure coding training relevant to their specific language (e.g., Java, Python, C#), mark as Non-Compliant.

3. Static Application Security Testing (SAST) Integration Validated

Verification Criteria: Automated SAST tools are integrated into the development lifecycle to scan source code for vulnerabilities prior to compilation or deployment.

Required Evidence: Scan reports from tools like SonarQube, Snyk, or Checkmarx showing identified flaws and remediation status.

Pass/Fail Test: If source code is promoted to a production-ready branch without an automated SAST scan being executed, mark as Non-Compliant.
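
To illustrate what a build blocker can look like in practice, here is a minimal sketch of a CI gate script. The JSON report format and severity threshold are assumptions for demonstration; map them onto whatever your SAST tool actually exports.

```python
"""Minimal sketch of a SAST quality gate. Assumes the scanner can
export findings as a JSON list of objects with a 'severity' field;
adjust to your tool's real report format."""
import json
import sys

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}  # assumed policy threshold

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)  # assumed: a list of finding objects
    blockers = [f for f in findings
                if f.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for f in blockers:
        print(f"BLOCKER: {f.get('rule')} in {f.get('file')}:{f.get('line')}")
    # A non-zero exit code fails the CI job, preventing promotion.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

Wired in after the scan step (e.g., `python sast_gate.py report.json`), the non-zero exit stops the merge or deployment rather than merely recording the findings.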

4. Third-Party Library and Dependency Vetting Confirmed

Verification Criteria: Technical mechanisms, specifically Software Composition Analysis (SCA), are used to identify and manage security risks in third-party libraries and open-source components.

Required Evidence: Dependency scan logs or a Software Bill of Materials (SBOM) showing no active “Critical” or “High” severity vulnerabilities.

Pass/Fail Test: If the organisation cannot produce an inventory of third-party libraries or if production code contains unmitigated “Critical” CVEs in dependencies, mark as Non-Compliant.
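
As an auditor, you can sanity-check an SBOM yourself. The sketch below assumes a CycloneDX-style JSON SBOM with an embedded vulnerabilities array; field names vary across SCA tools, so treat it as a pattern rather than a drop-in script.

```python
"""Sketch: fail if an SBOM lists unmitigated Critical/High vulnerabilities.
Assumes a CycloneDX-style JSON layout ('vulnerabilities' with 'ratings'
and 'affects'); adjust field names to match your SCA tool's output."""
import json
import sys

def check_sbom(path: str) -> int:
    with open(path) as fh:
        sbom = json.load(fh)
    bad = [v for v in sbom.get("vulnerabilities", [])
           if any(r.get("severity", "").lower() in ("critical", "high")
                  for r in v.get("ratings", []))]
    for v in bad:
        refs = ", ".join(a.get("ref", "?") for a in v.get("affects", []))
        print(f"UNMITIGATED: {v.get('id')} affects {refs}")
    return 1 if bad else 0

if __name__ == "__main__":
    sys.exit(check_sbom(sys.argv[1]))
```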

5. Secure Handling of Sensitive Data in Code Verified

Verification Criteria: Source code is free from hardcoded secrets, API keys, credentials, or plain-text PII.

Required Evidence: Secret-scanning tool logs (e.g., GitHub Secret Scanning or TruffleHog) showing zero active high-risk secrets in repositories.

Pass/Fail Test: If a manual or automated check identifies an active API key or database password hardcoded in the repository, mark as Non-Compliant.
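
The logic behind this check is straightforward to demonstrate. The sketch below is a deliberately naive pattern sweep; real scanners such as TruffleHog add entropy analysis and live credential verification, so use it only to understand what the tooling is doing.

```python
"""Sketch of a naive secret sweep over a working tree. Illustrative
patterns only; production scanners do far more."""
import pathlib
import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.I),
    "Private key header": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(root: str = ".") -> list[tuple[str, int, str]]:
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_dir() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits

if __name__ == "__main__":
    for path, lineno, label in scan():
        print(f"{path}:{lineno}: possible {label}")
```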

6. Peer Code Review Security Focus Validated

Verification Criteria: A mandatory peer review process is enforced where security-specific checks (e.g., input sanitisation, output encoding) are documented as part of the approval.

Required Evidence: Pull Request (PR) logs or code review checklists showing explicit sign-off on security considerations by an independent reviewer.

Pass/Fail Test: If code can be merged into a production-ready branch without a documented security-focused review by someone other than the author, mark as Non-Compliant.
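
Evidence for this item can often be pulled programmatically from the repository host. The sketch below queries GitHub's documented branch-protection REST endpoint; the organisation and repository names are placeholders, and other hosts (GitLab, Bitbucket) expose equivalent settings under different APIs.

```python
"""Sketch: verify that a protected branch requires at least one
approving review before merge. Uses GitHub's documented REST API;
GITHUB_TOKEN is an assumed environment variable."""
import json
import os
import sys
import urllib.request

def check_branch_protection(owner: str, repo: str, branch: str = "main") -> bool:
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    })
    # Note: a 404 here means the branch is not protected at all,
    # which is itself an audit finding.
    with urllib.request.urlopen(req) as resp:
        protection = json.load(resp)
    reviews = protection.get("required_pull_request_reviews") or {}
    return reviews.get("required_approving_review_count", 0) >= 1

if __name__ == "__main__":
    ok = check_branch_protection("example-org", "example-repo")  # placeholders
    print("compliant" if ok else "NON-COMPLIANT: merges allowed without review")
    sys.exit(0 if ok else 1)
```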

7. Use of Proven Cryptographic Libraries Confirmed

Verification Criteria: The organisation mandates the use of industry-standard, well-vetted cryptographic libraries rather than custom-developed or deprecated encryption methods.

Required Evidence: Code review of cryptographic modules or architectural standards citing the use of libraries like OpenSSL, Bouncy Castle, or native cloud KMS APIs.

Pass/Fail Test: If the codebase contains “home-grown” encryption algorithms or deprecated primitives (e.g., MD5 for hashing passwords), mark as Non-Compliant.
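
The difference is easy to demonstrate. Using only the Python standard library, the sketch below contrasts the failing pattern (a fast, unsalted MD5 digest) with a salted, iterated key-derivation function; in production, a vetted library such as bcrypt or Argon2 would be the preferred choice.

```python
"""Password hashing: what fails the control vs. what passes it.
Standard library only, for illustration."""
import hashlib
import os

password = b"correct horse battery staple"

# NON-COMPLIANT: fast, unsalted digest -- trivially brute-forced.
weak = hashlib.md5(password).hexdigest()

# Better: salted, deliberately slow key-derivation function (PBKDF2).
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print("md5 (fails audit):", weak)
print("pbkdf2 (passes):  ", strong.hex())
```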

8. Application Error Handling and Information Leakage Verified

Verification Criteria: Code is engineered to handle errors gracefully, ensuring that stack traces or sensitive system information are never revealed in public-facing error messages.

Required Evidence: Exception handling code blocks or UI screenshots showing generic “Internal Error” messages for failed requests.

Pass/Fail Test: If an application in production reveals database schema details or server file paths within a browser-based error message, mark as Non-Compliant.
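
The pattern auditors should expect to see is: log the full detail internally, return a generic message externally. The Flask example below is illustrative only; the same structure applies in any web framework.

```python
"""Sketch: graceful error handling that never leaks internals.
Flask is assumed purely for illustration."""
import logging
from flask import Flask, jsonify

app = Flask(__name__)
log = logging.getLogger("app")

@app.errorhandler(Exception)
def handle_unexpected(exc):
    # Full traceback goes to internal logs only.
    log.exception("Unhandled exception")
    # The client sees nothing about stack traces, schemas, or file paths.
    return jsonify(error="Internal Error"), 500

@app.route("/boom")
def boom():
    raise RuntimeError("db password is hunter2")  # never reaches the client

if __name__ == "__main__":
    app.run()
```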

9. Developer Local Environment Security Validated

Verification Criteria: Local development environments (IDEs/Workstations) are hardened and restricted to prevent the unauthorised extraction of source code or ingress of malware.

Required Evidence: Endpoint management (MDM) reports confirming disk encryption and AV/EDR presence on all developer machines.

Pass/Fail Test: If a developer workstation lacks full disk encryption or active real-time malware protection, mark as Non-Compliant.
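
While MDM reports remain the authoritative evidence, a quick local spot-check is sometimes useful during a walkthrough. The sketch below checks full-disk encryption status on macOS (via the built-in fdesetup utility) and Linux (via lsblk); adapt or extend it for your estate.

```python
"""Spot-check sketch for full-disk encryption on a developer machine.
Not a substitute for MDM/EDR reporting."""
import platform
import subprocess

def disk_encryption_enabled() -> bool:
    system = platform.system()
    if system == "Darwin":
        # FileVault status via the built-in fdesetup utility.
        out = subprocess.run(["fdesetup", "status"],
                             capture_output=True, text=True).stdout
        return "FileVault is On" in out
    if system == "Linux":
        # Look for any dm-crypt (LUKS) block device.
        out = subprocess.run(["lsblk", "-o", "TYPE", "-n"],
                             capture_output=True, text=True).stdout
        return "crypt" in out.split()
    return False  # other platforms: verify manually

if __name__ == "__main__":
    print("disk encryption:", "OK" if disk_encryption_enabled() else "MISSING")
```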

10. Periodic In-Depth Penetration Testing Recorded

Verification Criteria: Dynamic testing (DAST or Pentesting) is conducted at regular intervals to verify that secure coding practices have effectively mitigated runtime vulnerabilities.

Required Evidence: Signed Penetration Test reports and a corresponding Remediation Log showing closure of high-risk findings.

Pass/Fail Test: If the application has not undergone a formal security test following its last major architectural change or within the last 12 months, mark as Non-Compliant.
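
If the organisation keeps a testing register, the 12-month rule is trivial to verify programmatically. The sketch below assumes a hypothetical CSV register with application and last_test_date columns; adjust to the actual format in use.

```python
"""Sketch: flag applications whose last penetration test is older than
12 months. The CSV layout ('application', 'last_test_date' as
YYYY-MM-DD) is a hypothetical format."""
import csv
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)

def stale_apps(register_path: str) -> list[str]:
    today = date.today()
    stale = []
    with open(register_path, newline="") as fh:
        for row in csv.DictReader(fh):
            last = date.fromisoformat(row["last_test_date"])
            if today - last > MAX_AGE:
                stale.append(row["application"])
    return stale

if __name__ == "__main__":
    for app in stale_apps("pentest_register.csv"):  # hypothetical file
        print(f"NON-COMPLIANT: {app} has no test within the last 12 months")
```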

| Control Requirement | The ‘Checkbox Compliance’ Trap | The Reality Check |
|---|---|---|
| Standard Selection | Tool checks if a “Coding Policy” is uploaded. | Verify Technical Application. Does the code actually use the OWASP Top 10 mitigations, or is the policy just “shelfware”? |
| Vulnerability Scanning | Platform identifies “SAST Tool is Integrated.” | Verify Build Blockers. If the scanner finds 100 “High” flaws but the CI/CD pipeline deploys anyway, the control is a total failure. |
| Secret Management | Tool confirms “Vault is in use.” | Check the Repositories. A vault is useless if developers are still hardcoding backup keys in .env files. |
| Dependency Vetting | GRC tool records “Dependabot is active.” | Audit Remediation Speed. Check for CVEs older than 30 days. Active alerts are not compliance; patching is compliance. |
| Code Review | Tool records that PRs are “Required.” | Verify Independent Review. Does the system allow a developer to approve their own PR from a secondary “admin” account? |
| Developer Training | Tool checks for a “Training Budget.” | Verify Knowledge Retention. Check the bug tracker. If the same XSS flaws repeat monthly, the training has failed. |
| Runtime Protection | Platform identifies “WAF is active.” | Verify Code Hygiene. A WAF is a band-aid. Annex A 8.28 requires the code itself to be secure, not just the firewall in front of it. |

About the author

Stuart Barker
ISO 27001 Ninja
MSc Security | Lead Auditor | 30+ Years’ Experience | Ex-GE Leader

Stuart Barker is a veteran practitioner with over 30 years of experience in systems security and risk management. Holding an MSc in Software and Systems Security, he combines academic rigor with extensive operational experience, including a decade leading Data Governance for General Electric (GE).

As a qualified ISO 27001 Lead Auditor, Stuart possesses distinct insight into the specific evidence standards required by certification bodies. His toolkits represent an auditor-verified methodology designed to minimise operational friction while guaranteeing compliance.
