Pencheff is built around the principle that evidence-backed, adversarial testing should be as rigorous as a formal audit, producing output that engineers, executives, and compliance teams can all read from the same page.
Risk, reporting, and compliance
Our auditors
Guidance for readers validating scope, evidence, and compliance output.
Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.
Pencheff favors repeatable checks, then uses AI for triage, enrichment, orchestration, and remediation where it adds signal.
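The shared normalized record can be pictured as a single finding structure that every surface reads from. A minimal sketch; the field names and value vocabularies here are illustrative assumptions, not Pencheff's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Illustrative normalized finding record (all field names are assumptions)."""
    finding_id: str
    title: str
    severity: str        # e.g. "critical" | "high" | "medium" | "low"
    confidence: str      # e.g. "verified" | "advisory"
    category: str        # e.g. "injection"
    source: str          # e.g. "runtime" | "repo" | "supply-chain" | "manual"
    exploitability: str  # e.g. "proven" | "likely" | "theoretical"
    evidence: list = field(default_factory=list)
    suppressed: bool = False

# Dashboards, reports, exports, and retests would all read this one record
# rather than keeping their own copies of the finding.
f = Finding("F-001", "SQL injection in /login", "high", "verified",
            "injection", "runtime", "proven", ["request/response capture"])
```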
Coverage
What do our auditors test?
- Executive dashboard, letter grade, risk trends, severity rollups, and portfolio posture.
- Technical dossier with findings, reproduction, affected components, remediation, evidence, and re-examination state.
- Compliance mapping for OWASP, PCI DSS, SOC 2, NIST, ISO 27001, HIPAA, OWASP LLM, MITRE ATLAS, NIST AI RMF, EU AI Act, and GDPR.
- Threat modeling with STRIDE, DREAD, attack trees, abuse cases, mitigations, and scan context.
- Unified findings stream, AI triage, advisory enrichment, comments, suppressions, and audit appendices.
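Compliance mapping of this kind typically associates each finding category with control identifiers across frameworks. A minimal sketch of that lookup, assuming a hypothetical `CONTROL_MAP` table; the specific control IDs shown are illustrative, not Pencheff's actual mappings:

```python
# Hypothetical category -> framework -> control-ID mapping (illustrative IDs).
CONTROL_MAP = {
    "injection": {
        "OWASP": ["A03:2021"],
        "PCI DSS": ["6.2.4"],
    },
    "broken-access-control": {
        "OWASP": ["A01:2021"],
        "SOC 2": ["CC6.1"],
    },
}

def controls_for(category: str, framework: str) -> list:
    """Return the control IDs mapped to a finding category, empty if unmapped."""
    return CONTROL_MAP.get(category, {}).get(framework, [])
```

Because the mapping is a plain lookup from finding state, the same finding always lands on the same controls in an audit appendix.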
Execution
How does Pencheff run this?
- Collect findings from runtime, repo, supply chain, infrastructure, AI, and manual sources.
- Normalize severity, confidence, category, exploitability, reachability, and owner state.
- Generate executive, engineering, compliance, or retest views from the same source record.
- Track suppression, comments, fixes, re-examinations, and residual risk across scan history.
- Export reports and feed integrations without losing the underlying evidence chain.
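The collect, normalize, and view steps above can be sketched as a small pipeline. The scanner payload shapes, the severity scale, and the function names are assumptions made for illustration:

```python
# Findings as two different hypothetical scanners might emit them.
RAW = [
    {"src": "runtime", "sev": "HIGH", "name": "SQLi"},
    {"src": "repo", "sev": "3", "name": "Hardcoded secret"},
]

# One shared severity scale so heterogeneous labels become comparable.
SEVERITY_SCALE = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1,
                  "4": 4, "3": 3, "2": 2, "1": 1}

def normalize(raw: dict) -> dict:
    """Map a scanner-specific finding onto the shared record shape."""
    return {"source": raw["src"],
            "severity": SEVERITY_SCALE[raw["sev"].upper()],
            "title": raw["name"]}

def executive_view(findings: list) -> dict:
    """Severity rollup for a dashboard, derived from the same records."""
    counts = {}
    for f in findings:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    return counts

normalized = [normalize(r) for r in RAW]
```

An engineering or compliance view would be another pure function over `normalized`, which is what lets every audience read from one source record.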
Evidence
What evidence does this produce?
- Executive summaries, trend charts, severity counts, grade drivers, and business impact language.
- Technical evidence, scanner provenance, reproduction steps, remediation, and references.
- Framework control mappings and audit appendix entries tied to actual findings.
- Retest and verification history for closure and residual risk decisions.
Controls
How is this kept safe to run?
- Compliance rollups are deterministic and recomputed from finding state.
- Triage output distinguishes verified facts from advisory context.
- Reports inherit the same authorization and workspace boundaries as scans.
- Executives and auditors can read summaries while engineers keep deep evidence.
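"Deterministic and recomputed from finding state" can be read as: control status is a pure function of the current findings, so re-running the rollup always gives the same answer. A minimal sketch under that assumption (field names are illustrative):

```python
def control_status(findings: list) -> str:
    """A control fails if any unsuppressed, unresolved finding maps to it."""
    open_findings = [f for f in findings
                     if not f["suppressed"] and not f["fixed"]]
    return "fail" if open_findings else "pass"

findings = [
    {"id": "F-1", "suppressed": False, "fixed": True},   # remediated
    {"id": "F-2", "suppressed": True,  "fixed": False},  # accepted risk
]
```

Because nothing is cached or hand-edited, a suppression or fix changes the rollup only by changing the underlying finding state.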
Documentation
Read the full reference.
FAQ
Common questions
- Who are the security practitioners behind Pencheff?
- Pencheff is built by practitioners with hands-on experience in application penetration testing, red team engagements, and compliance assessment. The platform encodes the same techniques, evidence standards, and reporting rigour used in institutional-grade manual assessments.
- Can Pencheff reports serve as a substitute for manual penetration testing?
- Pencheff produces audit-grade reports that satisfy the evidence requirements for SOC 2, PCI-DSS, ISO 27001, and HIPAA assessments. For programmes requiring a named human penetration tester or a CREST/OSCP-certified assessor signature, Pencheff can be combined with a human review engagement.
- How does Pencheff maintain assessment quality at scale?
- Every finding promotion requires re-verification with a confirmatory payload. Findings that cannot be reproduced are automatically discarded. The methodology is versioned (currently v4.2) and the compliance mapping is updated with each framework revision — so report quality is consistent regardless of scan volume.
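The promotion gate described above can be sketched as follows. The `promote` function and the `reproduce` callback are stand-ins invented for illustration; the source does not specify Pencheff's internal API:

```python
def promote(finding: dict, reproduce) -> bool:
    """Promote a candidate finding only if a confirmatory re-check reproduces it.

    `reproduce` is a stand-in for replaying the confirmatory payload.
    """
    if reproduce(finding):
        finding["state"] = "promoted"
        return True
    finding["state"] = "discarded"  # non-reproducible findings are dropped
    return False

f = {"id": "F-9", "state": "candidate"}
promote(f, reproduce=lambda x: True)
```

The key property is that a finding cannot reach a report without passing the reproduction check, regardless of how many scans are running.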
Related