Pencheff

Code security

SaaS app

Dashboards, reports, integrations, schedules, and multi-workspace operations.

Scope: Deployment Models

Use the same platform for sprint gates, release assurance, audit prep, AI product validation, executive risk, and continuous attack-surface monitoring.

Output: Unified evidence

Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.

Method: Deterministic first

Pencheff favors repeatable checks, then uses AI for triage, enrichment, orchestration, and remediation where it adds signal.
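As a minimal sketch of that ordering (the rules shown and the `ai_triage` hook are hypothetical illustrations, not Pencheff's actual API): deterministic checks run first and always produce the same findings for the same input, and AI is invoked only to enrich what those checks could not settle.

```python
# Deterministic-first sketch: repeatable rule checks, then optional AI triage.
# RULES and ai_triage are illustrative placeholders, not real Pencheff APIs.
import re

RULES = [
    # (rule_id, pattern, severity) -- illustrative rules only
    ("py.eval-call", re.compile(r"\beval\("), "high"),
    ("py.subprocess-shell", re.compile(r"shell\s*=\s*True"), "medium"),
]

def deterministic_scan(source: str) -> list[dict]:
    """Repeatable checks: same input always yields the same findings."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern, severity in RULES:
            if pattern.search(line):
                findings.append({"rule": rule_id, "line": lineno,
                                 "severity": severity, "confidence": "high"})
    return findings

def triage(findings: list[dict], ai_triage=None) -> list[dict]:
    """AI only enriches what deterministic checks could not settle."""
    confident = [f for f in findings if f["confidence"] == "high"]
    ambiguous = [f for f in findings if f["confidence"] != "high"]
    if ai_triage and ambiguous:
        confident.extend(ai_triage(ambiguous))  # enrichment, not detection
    return confident
```

Because the deterministic pass is pure and rule-driven, rescanning an unchanged branch reproduces identical findings, which is what makes gating and retests trustworthy.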

Coverage

What does the SaaS app test?

  • Code scanning ties back into the broader security program without fragmenting the tooling experience.
  • Semgrep OSS packs, Bandit, gosec, Brakeman, ESLint security, tree-sitter rules, and niche-language scaffolds.
  • Secret detection with gitleaks and suspicious-code indicators with YARA-style patterns.
  • GitHub repository connection, webhook-triggered scans, hardlink staging, gitignore-aware filtering, and default-deny controls.
  • SARIF and GitHub check run output so developers see findings where they work.
  • Auto-fix preparation for Semgrep autofix, SCA version bumps, and reviewer-friendly patch synthesis.
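The SARIF output mentioned above can be illustrated with a minimal envelope. This sketch follows the SARIF 2.1.0 schema's field names (`ruleId`, `level`, `physicalLocation`); the `to_sarif` helper, tool name, and severity-to-level mapping are assumptions for illustration, not Pencheff's actual emitter.

```python
def to_sarif(findings: list[dict], tool_name: str = "pencheff-scan") -> dict:
    """Wrap normalized findings in a minimal SARIF 2.1.0 envelope.
    SARIF levels are none/note/warning/error, so internal severities
    are mapped down (mapping is an illustrative assumption)."""
    levels = {"critical": "error", "high": "error",
              "medium": "warning", "low": "note"}
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [
                {
                    "ruleId": f["rule"],
                    "level": levels.get(f["severity"], "warning"),
                    "message": {"text": f["message"]},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": f["file"]},
                            "region": {"startLine": f["line"]},
                        }
                    }],
                }
                for f in findings
            ],
        }],
    }
```

A document in this shape can be uploaded to GitHub code scanning, which is how findings surface in the pull-request view.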

Execution

How does Pencheff run this?

  • Connect or register a repository and choose a branch, scan profile, and scanner policy.
  • Stage the source safely, fan out language-specific scanners, and capture raw scanner output.
  • Normalize results into repo findings with file, line, rule, severity, scanner, and remediation metadata.
  • Merge code results with SCA, IaC, secrets, and runtime context to reduce duplicate triage.
  • Send annotations, SARIF, reports, fix PRs, or dashboard tasks depending on the workflow.
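The normalization step above can be sketched as a small adapter per scanner. Bandit's JSON output uses keys such as `filename`, `line_number`, `test_id`, and `issue_severity`; the normalized field names here are illustrative, not Pencheff's actual schema.

```python
def normalize_bandit(raw: dict) -> dict:
    """Map one Bandit JSON result into a normalized repo finding
    with file, line, rule, severity, scanner, and remediation metadata.
    The output field names are an illustrative schema."""
    return {
        "file": raw["filename"],
        "line": raw["line_number"],
        "rule": raw["test_id"],
        "severity": raw["issue_severity"].lower(),
        "scanner": "bandit",
        "remediation": raw.get("issue_text", ""),
    }
```

One adapter per scanner keeps raw output intact for evidence while every downstream consumer (dashboards, gates, fix PRs) reads the same normalized record.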

Evidence

What evidence does this produce?

  • File path, line number, rule id, scanner name, confidence, language, and vulnerable snippet context.
  • Suggested fix, fixed-version data when applicable, and status across suppressions or rechecks.
  • GitHub check output, SARIF upload, comments, and links back into the finding record.
  • Cross-finding signals when a code pattern aligns with runtime exploitation.
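The GitHub check output in the list above amounts to shaping a finding as a check-run annotation. The field names (`path`, `start_line`, `annotation_level`) follow the GitHub Checks API; the `to_check_annotation` helper and the severity mapping are illustrative assumptions.

```python
def to_check_annotation(finding: dict) -> dict:
    """Shape a normalized finding as a GitHub check-run annotation.
    Valid annotation_level values are notice/warning/failure;
    the severity mapping here is an illustrative assumption."""
    level = {"critical": "failure", "high": "failure",
             "medium": "warning"}.get(finding["severity"], "notice")
    return {
        "path": finding["file"],
        "start_line": finding["line"],
        "end_line": finding["line"],
        "annotation_level": level,
        "title": finding["rule"],
        "message": finding.get("remediation")
                   or "See the finding record for details.",
    }
```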

Controls

How is this kept safe to run?

  • Scanners used in the repo pipeline are chosen explicitly and are permissively licensed.
  • Secrets are handled as findings rather than echoed into broad UI surfaces.
  • CI gates can be tuned by severity, reachability, policy, and target branch.
  • Generated fixes remain reviewer-owned and trace back to original scanner evidence.
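A severity- and branch-aware CI gate like the one described above can be sketched as a small predicate; the severity ordering, defaults, and function name are illustrative assumptions, not Pencheff's actual policy engine.

```python
def gate(findings: list[dict], fail_on: str = "high",
         protected_branches: tuple = ("main",), branch: str = "main") -> bool:
    """Return True if the merge should be blocked.
    Gates only protected branches; blocks when any finding
    meets or exceeds the fail_on severity threshold."""
    order = ["low", "medium", "high", "critical"]
    if branch not in protected_branches:
        return False  # feature branches report but never block
    threshold = order.index(fail_on)
    return any(order.index(f["severity"]) >= threshold for f in findings)
```

Tuning `fail_on` per branch lets a team block releases on high-severity findings while letting feature branches merge with warnings only.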

Documentation

Read the full reference.

Related

Keep exploring Solutions.