Run web, API, code, dependency, cloud, AI, and internal-network assessments from one queue with unified findings, evidence, remediation, and audit output.
Platform detail
MCP toolkit
Tool-calling security automation exposed through the Pencheff MCP server.
Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.
Pencheff runs repeatable, deterministic checks first, then applies AI for triage, enrichment, orchestration, and remediation where it adds signal.
Coverage
What does MCP toolkit test?
- Tool-calling security automation exposed through the Pencheff MCP server.
- Workspace-aware target registration, scan ownership, and reusable scope settings.
- Unified finding records across runtime, source, dependency, infrastructure, AI, and manual evidence.
- Severity, reachability, exploitability, confidence, affected asset, and remediation metadata.
- Dashboards for status, risk, open work, rechecks, and audit-ready reporting.
- Exports for executive readers, engineers, compliance teams, and downstream systems.
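The unified finding record described above could be modeled roughly as follows. This is an illustrative sketch only; the field names are drawn from the lists on this page, not from Pencheff's actual schema.

```typescript
// Hypothetical shape of a normalized finding record.
// Field names are illustrative; Pencheff's real schema may differ.
type Severity = "critical" | "high" | "medium" | "low" | "info";

interface Finding {
  id: string;
  title: string;
  severity: Severity;
  source: "runtime" | "source" | "dependency" | "infrastructure" | "ai" | "manual";
  affectedAsset: string;
  cweOrCategory: string;
  reachability: "reachable" | "unreachable" | "unknown";
  exploitability: "confirmed" | "likely" | "theoretical";
  confidence: number; // 0..1, scanner confidence
  remediation: string;
  status: "open" | "triaged" | "resolved" | "retest";
}

// Dashboards, reports, exports, and retests would all read this one shape.
const example: Finding = {
  id: "fnd-001",
  title: "SQL injection in /search",
  severity: "high",
  source: "runtime",
  affectedAsset: "api.example.com",
  cweOrCategory: "CWE-89",
  reachability: "reachable",
  exploitability: "confirmed",
  confidence: 0.95,
  remediation: "Parameterize the query in the search handler.",
  status: "open",
};
```

The point of a single record type is that every downstream consumer reads the same fields, so severity or status never has to be re-derived per view.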
Execution
How does Pencheff run this?
- Register the target or choose an existing workspace asset.
- Select a profile that controls depth, safety, time budget, and evidence requirements.
- Run deterministic checks first, then enrich high-signal leads with agentic analysis where useful.
- Deduplicate findings, preserve raw evidence, and attach remediation guidance.
- Route the output into dashboards, reports, integrations, schedules, and retest loops.
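Since the toolkit is exposed over MCP, launching a run like the steps above would arrive as a JSON-RPC `tools/call` request. The envelope below follows the standard MCP shape, but the tool name `run_assessment` and its arguments are hypothetical, not Pencheff's documented tool surface.

```typescript
// Sketch of launching an assessment over MCP (JSON-RPC 2.0).
// "run_assessment" and its argument names are assumptions for illustration.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "run_assessment", // hypothetical tool name
    arguments: {
      target: "https://app.example.com",
      profile: "standard", // controls depth, safety, and time budget
      evidence: "full",    // evidence requirements for the run
    },
  },
};

// The server would receive this as the body of a JSON-RPC message.
const body = JSON.stringify(request);
```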
Evidence
What evidence does this produce?
- Finding title, severity, affected component, CWE or category, confidence, and status.
- Reproduction notes, scanner provenance, request or trace evidence where applicable.
- Remediation guidance written for the observed behavior rather than a generic checklist.
- Compliance mappings, owner state, comments, and re-examination history.
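The re-examination history mentioned above might be stored as timestamped entries per finding. A minimal sketch, with hypothetical field names:

```typescript
// Hypothetical retest-history entry attached to a finding.
interface RetestEntry {
  findingId: string;
  checkedAt: string; // ISO-8601 timestamp
  result: "still-present" | "remediated" | "inconclusive";
  note: string;
}

// The newest entry determines the finding's current retest state.
function latestResult(history: RetestEntry[]): RetestEntry | undefined {
  return [...history]
    .sort((a, b) => a.checkedAt.localeCompare(b.checkedAt))
    .at(-1);
}

const history: RetestEntry[] = [
  {
    findingId: "fnd-001",
    checkedAt: "2024-05-01T10:00:00Z",
    result: "still-present",
    note: "Payload still reflected in response.",
  },
  {
    findingId: "fnd-001",
    checkedAt: "2024-06-01T10:00:00Z",
    result: "remediated",
    note: "Input is now parameterized.",
  },
];
```

Keeping the full history, rather than only the latest state, is what makes the record audit-ready: each status change stays traceable to a dated check.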
Controls
How is this kept safe to run?
- Authorized testing boundaries remain explicit at target creation.
- Credentials and secrets are handled as scoped assessment inputs.
- Operator-facing output separates confirmed issues from informational context.
- Every item is designed to be traceable from summary to source evidence.
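Separating confirmed issues from informational context, as the controls above require, amounts to a simple partition over the output. An illustrative sketch (the `confirmed` flag is an assumed field, not Pencheff's API):

```typescript
// Illustrative only: split operator-facing output into confirmed
// issues vs. informational context.
interface OutputItem {
  title: string;
  confirmed: boolean; // hypothetical flag for illustration
}

function partitionOutput(items: OutputItem[]): {
  confirmed: OutputItem[];
  informational: OutputItem[];
} {
  return {
    confirmed: items.filter((i) => i.confirmed),
    informational: items.filter((i) => !i.confirmed),
  };
}

const out = partitionOutput([
  { title: "Reflected XSS on /login", confirmed: true },
  { title: "Verbose server header", confirmed: false },
]);
```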
Documentation
Read the full reference.