Yinkozi
services / penetration testing

Penetration testing as research — not a checklist.

We test the way an attacker actually works: manually, hypothesis-driven, with deep reading of the system, and with custom tooling we build for the customer's stack. No OWASP checkbox runs. No automated SaaS scans dressed up as a service. The methodology stays human: we use local LLMs to accelerate our tooling, not to drive the work.

01 / how the work runs

Read the system. Build the tooling. Find what nobody is looking for.

Every engagement starts with reading. We read the customer's documentation, the customer's source code where it is shared, the customer's deployment topology. We learn the system the way a senior engineer at the customer would.

Then we form hypotheses about where the system is vulnerable, given its architecture and threat model — not given a generic vulnerability list. The hypothesis drives the tooling.

Most engagements include custom tooling built specifically for that customer's stack: parsers for proprietary protocols, instrumentation for non-standard runtimes, harnesses for fuzzing custom code paths. The tooling becomes part of what the customer keeps.
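
As an illustration of the shape this tooling takes, here is a minimal fuzz-harness sketch using Google's Atheris coverage-guided fuzzer for Python. The frame_parser module and its parse_frame / FrameError names are hypothetical stand-ins for a customer's proprietary-protocol parser, not real engagement code:

    import sys
    import atheris

    # Instrument imports so Atheris collects coverage from the
    # target. frame_parser is a hypothetical stand-in for the
    # customer's proprietary-protocol parser.
    with atheris.instrument_imports():
        from frame_parser import parse_frame, FrameError

    def TestOneInput(data: bytes) -> None:
        # Feed raw fuzzer-generated bytes to the parser. An
        # expected, clean rejection (FrameError) is fine; any
        # other exception, crash, or hang is a lead for manual
        # investigation.
        try:
            parse_frame(data)
        except FrameError:
            pass

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()

The real harnesses are larger, with protocol-aware mutators and seed corpora, but the shape is the same: one entry point, one instrumented target, runnable against every release.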

Pentest methodology: surface to findings, with custom tooling and a feedback loop.

01 Customer surface · code, architecture, infrastructure, docs
02 Reading & hypothesis · threat model specific to the system
03 Custom tooling · parsers, harnesses, analyzers, fuzzers · built per engagement
04 Manual exploitation · senior engineers · reproducible PoC
05 Findings + tooling delivered · engineer-grade report · tooling kept by customer

Feedback loop: findings inform new tooling.

What this is not: surface → scanner → report. A commodity pentest is one arrow. Ours is five steps with a feedback loop.

02 / what we don't do

The line we hold.

The market for security testing is full of services that look like ours from the outside. The differences matter most at the moment someone finds something an attacker would actually exploit.

  • No checklist-driven testing

    We don't deliver a ticked-off OWASP Top 10 PDF. The OWASP framework is fine as a glossary; it is a poor substitute for thinking about this customer's threat model.

  • No automated SaaS scans dressed as a service

    Burp, Qualys, Nessus, ZAP: we use off-the-shelf tools where they help, but scanner output is the start of an investigation, not the deliverable. We do not bill expert-rate hours for tool output.

  • Local LLMs as tooling, not as methodology

    We do not let an LLM drive a pentest. Current models hallucinate findings and cannot reason about the customer's specific architecture. We do use local LLMs we run in-house: to accelerate analysis, scaffold engagement-specific tooling, and pattern-match across our internal corpus. Customer material never goes to a commercial SaaS model. A sketch of this pattern follows the list.

  • No "compliance pentest" deliverable

    We work with customers who need to pass PCI / SOC2 / ISO audits. We do not run engagements where the deliverable is a stamp. The work that fits us produces real findings against realistic attacker-capability assumptions; the audit paperwork can draw on those findings, but it is never the point of the engagement.
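
To make the local-LLM boundary concrete, here is a minimal sketch of the pattern, assuming an Ollama instance serving a model on the analyst's own machine. The endpoint is Ollama's standard generate API; the model name and triage prompt are illustrative, not our production tooling:

    import json
    import urllib.request

    # Everything below talks to a model on localhost; the crash
    # artifact never leaves the analyst's machine.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def triage_summary(crash_log: str) -> str:
        # Ask a local model for a first-pass read of a fuzzer
        # crash. The output is a hint for the analyst, never a
        # finding by itself.
        payload = {
            "model": "qwen2.5-coder:14b",  # illustrative local model
            "prompt": (
                "Classify the likely root cause of this crash "
                "(e.g. OOB read, UAF, assertion) and say why:\n\n"
                + crash_log
            ),
            "stream": False,
        }
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

The model accelerates the reading; the analyst decides what is a finding.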

03 / surfaces we test

Where the work has been applied.

Web & API

Web applications, internal admin tools, REST and GraphQL APIs, gRPC services, server-side template engines, business-logic flaws.

Mobile applications

iOS, Android, Huawei (HMS), embedded mobile clients. Manual binary analysis, runtime hooking, side-channel analysis. Deeper here →

Cloud infrastructure

AWS, Azure, GCP. IAM-driven attack-path analysis, cross-account misconfiguration, custom infrastructure-as-code review. Deeper here →

AI / LLM systems

Prompt injection, data exfiltration, training-data leakage, model integrity, RAG-pipeline attacks. Deeper here →

OT / SCADA / ICS

Industrial control systems, refinery and pipeline networks, custom-protocol endpoints. Hardware-lab testing with real PLCs. Deeper here →

Hardware & embedded

Payment terminals, IoT devices, agent-banking hardware, secure elements, firmware extraction and analysis.

04 / deliverables

What the customer keeps.

Reproducible findings. Each finding includes the steps to reproduce, the proof-of-concept code or payloads where useful, the affected components, the realistic attacker model, and the recommended remediation path — written for the engineer who will fix it, not for the audit committee.

Custom tooling. The instrumentation, fuzzing harnesses, parsers, and analyzers built for the engagement are delivered with documentation. The customer can re-run them on every release.

Threat-model addendum. Where the engagement reveals threat-model gaps, we deliver a written addendum to the customer's existing threat model — not a parallel document that goes unread.

Walk-through with engineering. We brief the engineering team directly. Every finding is walked through with the engineers who will own the remediation.

05 / engagement shape

Scoped first, continuing after.

A first engagement is sized to a defined scope and surface. Most customer relationships continue beyond that — quarterly cadence, rotating scope as the surface evolves, with the methodology and tooling carrying over from one engagement to the next.

Duration depends entirely on the size and complexity of the surface, the depth of coverage requested, and whether we are working alongside the customer's engineering team or going in cold. We scope each engagement against the actual surface, not against a pre-set table.

We accept short, well-defined pentests — and we are most useful as the embedded security team that knows the customer's stack and grows with it.

06 / start a conversation

Tell us what you are trying to defend.

email