Source-code review by humans who can read the code.
Manual review of security-critical code where commercial static-analysis tools do not have enough context. Custom analyzers built for the customer's languages and frameworks where off-the-shelf tools cannot reach. Cryptography implementation review. Build-pipeline and supply-chain review.
Read the code the way an attacker would.
Static-analysis tools find a useful subset of bugs: the patterns commercial vendors have rules for. The bugs that actually carry attackers into production (broken business logic, deserialisation in the wrong direction, time-of-check-to-time-of-use windows specific to the customer's framework, custom-protocol parsing flaws) are not in any commercial rule pack.
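A minimal sketch of the TOCTOU class, in Go. The handler and the path are invented; the window between the check and the use is the real pattern, and each call is safe on its own, which is why pattern-based scanners rarely flag the pair.

```go
package main

import (
	"fmt"
	"os"
)

// serveFile is a hypothetical handler: validate the path, then open it.
// Between Stat (the check) and Open (the use), an attacker who controls
// the directory can swap the file for a symlink to something sensitive.
// Each call is safe on its own, which is why pattern-based scanners
// rarely flag the pair.
func serveFile(userPath string) error {
	info, err := os.Stat(userPath) // check
	if err != nil || !info.Mode().IsRegular() {
		return fmt.Errorf("rejected: %s", userPath)
	}
	f, err := os.Open(userPath) // use: the path may resolve elsewhere by now
	if err != nil {
		return err
	}
	defer f.Close()
	// ... stream f to the caller ...
	return nil
}

func main() {
	if err := serveFile("/tmp/report.txt"); err != nil {
		fmt.Println(err)
	}
}
```

The fix is to open first and interrogate the handle with f.Stat(), so the check and the use refer to the same object.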
We read security-critical code paths manually. We read the framework the customer wrote on top of. We trace the data from where it enters the system to where it lands in storage or executes. We find what the scanners do not.
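What tracing the data means in practice, as a sketch. The handlers and the schema are invented; database/sql and net/http are standard library. The write is parameterised and looks clean, so a taint tracker drops the trail at the database. The reviewer follows the bytes back out.

```go
package main

import (
	"database/sql"
	"fmt"
	"net/http"
)

var db *sql.DB // assumed to be opened elsewhere

// saveBio: attacker-controlled input enters at the HTTP boundary and
// lands in storage. The write uses a bound parameter, so it looks
// clean, and most taint trackers drop the trail right here.
func saveBio(w http.ResponseWriter, r *http.Request) {
	bio := r.FormValue("bio") // source
	db.Exec("UPDATE users SET bio = ? WHERE id = ?", bio, r.URL.Query().Get("u"))
}

// showBio, often in a different file reviewed by a different person:
// the same bytes come back out of storage and land in markup
// unescaped. A second-order sink no scanner connects to saveBio.
func showBio(w http.ResponseWriter, r *http.Request) {
	var bio string
	db.QueryRow("SELECT bio FROM users WHERE id = ?", r.URL.Query().Get("u")).Scan(&bio)
	fmt.Fprintf(w, "<p>%s</p>", bio) // sink: stored XSS
}

func main() {
	http.HandleFunc("/bio/save", saveBio)
	http.HandleFunc("/bio/show", showBio)
	http.ListenAndServe(":8080", nil)
}
```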
Where the customer's stack is something a commercial tool does not cover — a proprietary parser, a private DSL, a vendor-specific runtime — we build the analyzer alongside the engagement.
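A toy of that shape, hedged: a real engagement targets the customer's parser or DSL, but the mechanics are this go/ast walk over a source file. Everything below is standard library; the rule itself, flagging crypto/md5 where a signing path is expected, is the invented part.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// src stands in for a customer source file under review.
const src = `package demo

import "crypto/md5"

func digest(b []byte) [16]byte { return md5.Sum(b) }
`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	// Walk every node and flag calls into crypto/md5, a stand-in for
	// whatever idiom the customer's framework makes dangerous.
	ast.Inspect(file, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		if sel, ok := call.Fun.(*ast.SelectorExpr); ok {
			if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "md5" {
				fmt.Printf("%s: md5.%s used on what should be a signing path\n",
					fset.Position(call.Pos()), sel.Sel.Name)
			}
		}
		return true
	})
}
```

The same skeleton, pointed at the customer's grammar instead of Go's, is the shape of what stays with the customer.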
What the work covers.
- Manual review of critical paths
Security-critical paths read by an engineer who can hold the system in their head: authentication, authorisation, session, payment, signing, parsing, IPC.
- Custom analyzers
Built for the customer's languages, frameworks, and idioms: Semgrep / CodeQL rule packs, and bespoke tools where commercial coverage stops.
- Cryptography review
Protocol implementations, custom primitives, key-management code, signing and verification paths, RNG usage: where the customer is shipping their own crypto. A concrete RNG sketch follows this list.
- Build pipeline and supply chain
CI/CD trust boundaries, dependency review, signing-key custody, artifact integrity, reproducible-build review, third-party-package risk.
- Low-level and embedded
C / C++ / Rust review, memory-safety boundaries, embedded-system code, firmware components, kernel-adjacent code.
- Languages
Go, Rust, C, C++, Python, TypeScript / JavaScript, Java / Kotlin, Swift / Objective-C, Solidity, plus customer-specific DSLs.
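The RNG item above, made concrete. The key function is hypothetical; the bug is the canonical one. math/rand is a seeded deterministic generator, crypto/rand is the operating system's CSPRNG, and the two read almost identically at the call site, which is why this survives review by grep.

```go
package main

import (
	cryptorand "crypto/rand"
	"fmt"
	mathrand "math/rand"
)

// newSessionKeyBroken is the finding: math/rand is a seeded PRNG, so
// anyone who recovers or guesses the seed can regenerate every key
// this function has ever produced.
func newSessionKeyBroken() []byte {
	key := make([]byte, 32)
	mathrand.Read(key) // deterministic stream, not a CSPRNG
	return key
}

// newSessionKey is the fix: draw from the OS entropy source and
// treat a failed read as fatal.
func newSessionKey() ([]byte, error) {
	key := make([]byte, 32)
	if _, err := cryptorand.Read(key); err != nil {
		return nil, err
	}
	return key, nil
}

func main() {
	k, err := newSessionKey()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", k)
}
```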
The shape we don't ship.
- No SAST output as deliverable
Reformatted Semgrep / CodeQL / SonarQube findings are not a code review. We use those tools for triage, not as the work product.
- No "AI-driven code review" deliverable
An LLM scanning a codebase produces noise that the customer cannot triage. We do use local LLMs in-house — to summarise large modules, generate review checklists per file, scaffold custom analyzer rules, and pattern-match against our internal corpus. The reviewing engineer is the deliverable, not the model.
- No code leaves the engagement
Customer source is reviewed in environments that match the customer's residency requirements. No commercial SaaS in the data path. Sovereign deployments, where required, run inside the customer's infrastructure.
Three forms of engagement.
Targeted review of security-critical paths. The customer identifies the load-bearing modules — authentication, signing, payment, parsing. We read those manually and report.
Custom-analyzer build alongside review. Where the customer's stack falls outside commercial-tool coverage, we build the analyzer and run the review in parallel. The analyzer stays with the customer.
Continuing review in CI. The customer keeps a running engagement: every PR touching security-critical paths gets a senior engineer's eye, with a quarterly written summary.
Duration depends on the codebase size and the depth of coverage. We scope against the actual code.