Pencheff

AI security

Attack strategies

Roleplay, payload splitting, obfuscation, encoding, jailbreak corpora, regression suites, and judge-backed scoring.

ScopeLLM Red Team

Test AI products before attackers do: prompt attacks, tool abuse, data leakage, unsafe output, guardrail bypass, multi-agent workflows, and runtime policy enforcement.

OutputUnified evidence

Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.

MethodDeterministic first

Pencheff favors repeatable checks, then uses AI for triage, enrichment, orchestration, and remediation where it adds signal.

Coverage

What does Attack strategies test?

  • Roleplay, payload splitting, obfuscation, encoding, jailbreak corpora, regression suites, and judge-backed scoring.
  • This page is part of AI Security under LLM Red Team.
  • It links back into the broader red team models, agents, tools, and guardrails experience.
  • OWASP LLM Top 10 coverage for prompt injection, sensitive information disclosure, supply chain, data leakage, plugins, agency, overreliance, and model theft.
  • Jailbreak strategies, roleplay, encoding, payload splitting, multilingual variants, custom datasets, and judge-backed scoring.
  • Agentic tests for tool authorization, memory poisoning, context exfiltration, planner hijacking, and unsafe side effects.
  • Sentry runtime guardrails, HTTP sidecars, LiteLLM plugins, MCP middleware, PII, secrets, unsafe HTML, and tool authorization checks.
  • AI governance mapping to OWASP LLM, MITRE ATLAS, NIST AI RMF, EU AI Act, ISO/IEC 42001, GDPR, and SOC 2.

Execution

How does Pencheff run this?

  • Register an LLM endpoint, chatbot, model gateway, MCP host, or agent workflow.
  • Choose built-in categories, datasets, guardrails, custom prompts, and optional judge settings.
  • Run adversarial campaigns across prompt, tool, memory, retrieval, output, and policy paths.
  • Classify failures by category, strategy, severity, transcript, token cost, and guardrail recommendation.
  • Turn passing and failing prompts into regression suites for releases and model upgrades.

Evidence

What evidence does this produce?

  • Prompt, response, tool call, policy decision, transcript, category, strategy, judge result, and confidence.
  • Recommended guardrails with exact unsafe behavior, enforcement point, and regression prompt.
  • Token usage, model/provider metadata, retry behavior, and cost-oriented observability.
  • Governance mappings for AI risk, safety, privacy, and compliance programs.

Controls

How is this kept safe to run?

  • Tests can be run through HTTP, chat-completions, LiteLLM, MCP, or custom adapters.
  • Guardrail recommendations stay tied to the scan that exposed the failure.
  • Agentic testing focuses on authorization, context boundaries, and side-effect control.
  • Runtime policy checks can be placed before prompts, after responses, or around tools.

Documentation

Read the full reference.

FAQ

Common questions

What is Crescendo attack strategy in LLM red teaming?
Crescendo is a multi-turn jailbreak strategy where the attacker gradually escalates the conversation — starting with benign-seeming requests and incrementally steering the model toward policy violations. It exploits the fact that LLMs are more susceptible to harmful requests when they follow a series of contextually reasonable prior turns.
What is PAIR (Prompt Automatic Iterative Refinement)?
PAIR uses an attacker LLM to iteratively refine adversarial prompts based on the target model's responses. Starting from an initial jailbreak attempt, the attacker model analyses what failed and generates an improved prompt — repeating until the target produces a policy violation or the budget is exhausted.
Does Pencheff use an attacker LLM to generate adversarial prompts?
Yes. Pencheff uses attacker-LLM synthesis — a separate language model that generates, evaluates, and iterates adversarial prompts against the target. This enables semi-infinite prompt variation and cross-model attack transfer beyond what static prompt libraries can achieve.

Related

Keep exploring AI Security.