Test AI products before attackers do: prompt attacks, tool abuse, data leakage, unsafe output, guardrail bypass, multi-agent workflows, and runtime policy enforcement.
AI security
OWASP LLM Top 10
Prompt injection, insecure output, training data exposure, DoS, supply chain, data leakage, plugins, agency, overreliance, and model theft.
Findings, reports, dashboards, exports, integrations, and retests all read from the same normalized record.
Pencheff favors repeatable checks, then uses AI for triage, enrichment, orchestration, and remediation where it adds signal.
Coverage
What does OWASP LLM Top 10 test?
- Prompt injection, insecure output, training data exposure, DoS, supply chain, data leakage, plugins, agency, overreliance, and model theft.
- This page is part of AI Security under LLM Red Team.
- It links back into the broader red team models, agents, tools, and guardrails experience.
- OWASP LLM Top 10 coverage for prompt injection, sensitive information disclosure, supply chain, data leakage, plugins, agency, overreliance, and model theft.
- Jailbreak strategies, roleplay, encoding, payload splitting, multilingual variants, custom datasets, and judge-backed scoring.
- Agentic tests for tool authorization, memory poisoning, context exfiltration, planner hijacking, and unsafe side effects.
- Sentry runtime guardrails, HTTP sidecars, LiteLLM plugins, MCP middleware, PII, secrets, unsafe HTML, and tool authorization checks.
- AI governance mapping to OWASP LLM, MITRE ATLAS, NIST AI RMF, EU AI Act, ISO/IEC 42001, GDPR, and SOC 2.
Execution
How does Pencheff run this?
- Register an LLM endpoint, chatbot, model gateway, MCP host, or agent workflow.
- Choose built-in categories, datasets, guardrails, custom prompts, and optional judge settings.
- Run adversarial campaigns across prompt, tool, memory, retrieval, output, and policy paths.
- Classify failures by category, strategy, severity, transcript, token cost, and guardrail recommendation.
- Turn passing and failing prompts into regression suites for releases and model upgrades.
Evidence
What evidence does this produce?
- Prompt, response, tool call, policy decision, transcript, category, strategy, judge result, and confidence.
- Recommended guardrails with exact unsafe behavior, enforcement point, and regression prompt.
- Token usage, model/provider metadata, retry behavior, and cost-oriented observability.
- Governance mappings for AI risk, safety, privacy, and compliance programs.
Controls
How is this kept safe to run?
- Tests can be run through HTTP, chat-completions, LiteLLM, MCP, or custom adapters.
- Guardrail recommendations stay tied to the scan that exposed the failure.
- Agentic testing focuses on authorization, context boundaries, and side-effect control.
- Runtime policy checks can be placed before prompts, after responses, or around tools.
Documentation
Read the full reference.
References
Authoritative sources
FAQ
Common questions
- What is the OWASP LLM Top 10?
- The OWASP LLM Top 10 is the industry-standard classification of the most critical security risks for large language model applications, covering: LLM01 Prompt Injection, LLM02 Insecure Output Handling, LLM03 Training Data Poisoning, LLM04 Model Denial of Service, LLM05 Supply Chain Vulnerabilities, LLM06 Sensitive Information Disclosure, LLM07 Insecure Plugin Design, LLM08 Excessive Agency, LLM09 Overreliance, and LLM10 Model Theft.
- Does Pencheff cover all 10 categories of the OWASP LLM Top 10?
- Pencheff covers all dynamically testable categories in the OWASP LLM Top 10 (2025) — primarily LLM01 through LLM09. LLM03 (training data poisoning) and LLM10 (model theft) require access to training infrastructure and model weights, which are outside the scope of black-box assessment.
- How do I prove OWASP LLM Top 10 compliance to auditors or customers?
- Run a Pencheff LLM red team assessment. The resulting report maps every finding to its OWASP LLM Top 10 category with evidence, and includes a compliance matrix showing test coverage per category — suitable for audit submission or customer security questionnaire responses.
Related