Use the same platform for sprint gates, release assurance, audit prep, AI product validation, executive risk reporting, and continuous attack-surface monitoring.
Runtime DAST
Self-hosting
Operate the full platform — API, web app, observability, database, and workers — inside your own environment.
Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.
Pencheff favors repeatable checks, then uses AI for triage, enrichment, orchestration, and remediation where it adds signal.
Coverage
What does a self-hosted deployment test?
- Passive and active reconnaissance, technology fingerprinting, endpoint inventory, and crawl expansion.
- Authenticated crawling for SPAs, role-aware flows, cookies, headers, JWTs, OAuth/OIDC, and MFA-sensitive areas.
- Injection coverage for SQL, NoSQL, command, SSTI, XXE, SSRF, LDAP, deserialization, path traversal, and file upload abuse.
- Client-side and protocol checks for XSS, DOM XSS, CSRF, CORS, clickjacking, cache poisoning, redirects, headers, WebSockets, and GraphQL.
- Verification probes that promote high-confidence results into replayable findings with request and response context.
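The promotion step in the last item can be sketched as pure logic. This is an illustrative sketch only — the `ProbeResult` fields and the reflected-canary/OAST heuristic are assumptions, not Pencheff's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    """Outcome of a focused verification probe (illustrative fields)."""
    canary: str            # unique marker injected into the request
    response_body: str     # body returned by the probed endpoint
    oast_callback: bool    # whether an out-of-band interaction fired

def promote(result: ProbeResult) -> str:
    """Promote a candidate issue only on high-confidence evidence:
    the canary reflected verbatim, or an OAST callback proving the
    payload reached out-of-band code."""
    if result.oast_callback or result.canary in result.response_body:
        return "confirmed"
    return "candidate"
```

A reflected canary (`promote(ProbeResult("x9q2", "echo: x9q2", False))`) yields `"confirmed"`; anything weaker stays a candidate.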
Execution
How does Pencheff run this?
- Create a URL or API target with scope, auth material, allowed hosts, and rate limits.
- Map the surface with recon, crawl, endpoint discovery, and optional OpenAPI or traffic-derived routes.
- Run profile-controlled checks from quick validation through deep exploit-chain analysis.
- Re-test candidate issues with focused probes before they become confirmed findings.
- Attach evidence, severity, remediation, and compliance mappings to the unified findings stream.
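The steps above can be sketched as a target definition feeding an ordered pipeline. All field and stage names here are hypothetical, chosen to mirror the list rather than Pencheff's real schema:

```python
# Hypothetical target definition mirroring the steps above.
target = {
    "url": "https://app.example.com",
    "scope": {"allowed_hosts": ["app.example.com", "api.example.com"]},
    "auth": {"type": "oauth2", "client_id": "ci-scanner"},  # assessment-only material
    "rate_limit_rps": 10,
    "profile": "deep",  # quick validation through deep exploit-chain analysis
}

def run_pipeline(target: dict) -> list[str]:
    """Run the stages described above in order, at the target's profile depth."""
    stages = ["map_surface", "profile_checks", "verification_retest", "attach_evidence"]
    return [f"{stage}@{target['profile']}" for stage in stages]
```

Re-testing sits between the checks and the findings stream, so only verified issues carry evidence forward.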
Evidence
What evidence does this produce?
- HTTP request and response excerpts, affected URL, parameter, method, status code, and payload family.
- OAST callbacks, browser screenshots, chain notes, and exact reproduction steps where applicable.
- Authentication context, role assumptions, session notes, and guardrails used during assessment.
- OWASP, CWE, PCI DSS, SOC 2, ISO 27001, NIST, and HIPAA mappings for audit readers.
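A normalized finding record combining the evidence fields above might look like the following sketch. The key names and mapping identifiers are illustrative assumptions, not the real export schema:

```python
# Illustrative shape of a normalized finding record.
finding = {
    "title": "SQL injection in /search",
    "severity": "high",
    "url": "https://app.example.com/search",
    "parameter": "q",
    "method": "GET",
    "status_code": 500,
    "payload_family": "sql-injection",
    "request": "GET /search?q=%27%20OR%201%3D1-- HTTP/1.1",
    "response_excerpt": "HTTP/1.1 500 ... syntax error near 'OR'",
    "auth_context": {"role": "standard-user", "session": "cookie"},
    "mappings": {"owasp": "A03:2021", "cwe": "CWE-89"},
}
```

Because reports, dashboards, and exports all read from this one record, the same evidence serves engineers and audit readers alike.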
Controls
How is this kept safe to run?
- Scope allow-lists, profile depth, time budgets, and evidence requirements bound active testing.
- State-changing and destructive behavior can be constrained by target policy and profile selection.
- Findings are deduplicated against existing scan history and can be re-examined on demand.
- Authentication material is scoped to the target and treated as assessment-only input.
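The controls above can be pictured as a gate every active request passes through. The policy fields below are illustrative assumptions, not Pencheff's actual policy format:

```python
# Hypothetical per-target policy mirroring the controls above.
policy = {
    "allowed_hosts": ["app.example.com"],
    "time_budget_s": 3600,
    "allow_state_changing": False,
}

def allow_request(policy: dict, host: str, method: str, elapsed_s: float) -> bool:
    """Gate active testing: scope allow-list, time budget, and a policy
    switch constraining state-changing behavior."""
    if host not in policy["allowed_hosts"]:
        return False  # out-of-scope hosts are never probed
    if elapsed_s > policy["time_budget_s"]:
        return False  # time budget exhausted
    if method in {"POST", "PUT", "DELETE"} and not policy["allow_state_changing"]:
        return False  # destructive behavior constrained by policy
    return True
```

An in-scope GET passes; an out-of-scope host or a DELETE under a read-only policy is refused before any traffic is sent.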
Documentation
Read the full reference.
FAQ
Common questions
- Can Pencheff be deployed on-premises?
- Yes. Pencheff supports self-hosted deployment via Docker Compose and Kubernetes Helm charts. The entire platform — scanner engines, API, dashboard, and database — runs within your own infrastructure with no data leaving your environment.
- What are the infrastructure requirements for self-hosting Pencheff?
- A minimal self-hosted deployment requires 4 CPU cores and 8 GB RAM for a single-node setup. For concurrent deep scans, 8+ cores and 16 GB RAM are recommended. PostgreSQL and Redis are the only external service dependencies.
- Why would an organisation choose self-hosting over Pencheff's cloud?
- Self-hosting is typically chosen by regulated industries (financial services, healthcare, defence) where scan targets, credentials, and findings must never leave the organisation's network boundary — even to a trusted SaaS provider. It also satisfies air-gapped environment requirements.
- How are Pencheff updates applied in a self-hosted deployment?
- Pencheff releases container image updates that you pull and redeploy using your standard container orchestration tooling. Release notes are published with each version, and the API is versioned to ensure upgrade compatibility.
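The sizing guidance in the FAQ can be expressed as a quick pre-deployment check. The thresholds come directly from the answer above; the function itself is an illustrative sketch:

```python
def meets_requirements(cores: int, ram_gb: int, concurrent_deep_scans: bool = False) -> bool:
    """Minimal single-node setup: 4 CPU cores / 8 GB RAM.
    Concurrent deep scans: 8+ cores / 16 GB RAM."""
    need_cores, need_ram = (8, 16) if concurrent_deep_scans else (4, 8)
    return cores >= need_cores and ram_gb >= need_ram
```

A 4-core / 8 GB node clears the minimal bar but not the concurrent-deep-scan bar.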
Related