OWASP Top 10 for LLMs (2025)
The landscape of AI security is moving fast. The 2025 update to the OWASP Top 10 for LLM Applications introduces critical new risks such as System Prompt Leakage and Vector and Embedding Weaknesses.
The Top 10 Vulnerabilities
Here are six of the most critical risks you need to secure against in 2025.
LLM01: Prompt Injection
Attackers manipulate LLM input to override system instructions. This includes both direct jailbreaks and indirect injection via external data sources such as retrieved documents or web pages.
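As an illustration, direct injection attempts often reuse recognizable phrasings, which a cheap heuristic pre-filter can catch before the prompt ever reaches the model. This is a minimal sketch under stated assumptions: the pattern list and function name are illustrative, and real defenses layer intent classification on top of keyword rules.

```python
import re

# Illustrative patterns only: production filters use far larger,
# continuously updated rule sets plus ML-based intent classifiers.
INJECTION_PATTERNS = [
    r"ignore (\w+ ){0,3}instructions",        # e.g. "ignore all previous instructions"
    r"disregard (the |your )?system prompt",
    r"you are now in developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known jailbreak phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Keyword heuristics are a first line of defense, not a complete one: indirect injection hidden in retrieved content can be paraphrased past any fixed pattern list.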
LLM02: Sensitive Information Disclosure
LLMs inadvertently revealing PII, proprietary algorithms, or confidential business data in their responses.
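A redaction pass over outbound prompts is one common mitigation. The sketch below masks two easily recognized PII shapes before text leaves your infrastructure; the pattern names are hypothetical, and production redaction needs much broader coverage (names, addresses, credentials), typically backed by an NER model.

```python
import re

# Two illustrative PII shapes only; real deployments cover many more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```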
LLM03: Supply Chain Vulnerabilities
Risks from third-party models, datasets, and plugins. Compromised pre-trained models can contain backdoors.
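One baseline control is pinning the cryptographic digest of every third-party artifact (model weights, datasets, plugin bundles) and refusing to load anything whose hash has drifted. A minimal sketch, assuming you record the expected SHA-256 at vetting time:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """True only if the artifact matches the digest pinned at vetting time."""
    return sha256_of(path) == expected_digest
```

This catches tampering between vetting and deployment; it does not help if the upstream artifact was malicious from the start, which is why provenance controls (signed releases, SBOM-style manifests) matter too.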
LLM04: Data & Model Poisoning
Manipulation of training data or fine-tuning datasets to introduce biases or vulnerabilities into the model.
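Poisoned fine-tuning sets often plant a rare trigger phrase that later flips model behavior. A crude but cheap pre-training check is to scan for exact word n-grams that recur across suspiciously many samples. The function and threshold below are illustrative, not a complete poisoning defense.

```python
from collections import Counter

def suspicious_ngrams(samples: list[str], n: int = 3, min_count: int = 5) -> set[tuple[str, ...]]:
    """Flag word n-grams that repeat verbatim across many training samples.

    A benign corpus rarely repeats an exact 3-gram dozens of times,
    while a planted backdoor trigger does. The threshold is illustrative
    and would need tuning (and stopword handling) on real data.
    """
    counts: Counter = Counter()
    for sample in samples:
        words = sample.lower().split()
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return {gram for gram, count in counts.items() if count >= min_count}
```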
LLM06: Excessive Agency
New for Agents: Granting LLMs too much autonomy to execute actions (e.g., reading emails, deleting files) without human approval.
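One concrete mitigation is an approval gate between the model's tool calls and their execution: anything tagged high-risk is held until a human approves it. A minimal sketch, with an illustrative risk list and tool names:

```python
# Illustrative high-risk tool set; a real deployment would load this
# from policy configuration rather than hard-coding it.
HIGH_RISK_TOOLS = {"delete_file", "send_email", "execute_shell"}

def dispatch(tool_name: str, args: dict, approver=None):
    """Execute a tool call, but hold high-risk calls for human approval.

    `approver` is a callback representing the human-in-the-loop; if it
    is absent or declines, the call is blocked rather than executed.
    """
    if tool_name in HIGH_RISK_TOOLS:
        if approver is None or not approver(tool_name, args):
            return {"status": "blocked", "reason": "approval required"}
    return {"status": "executed", "tool": tool_name}
```

The design choice here is fail-closed: a high-risk action with no approver available is blocked by default, never silently executed.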
LLM07: System Prompt Leakage
New Entry: Attackers tricking the model into revealing its internal system prompt, exposing business logic and intellectual property.
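Leakage of this kind can be detected with a canary token: embed a random marker in the system prompt, then scan every response for it. A sketch, with an invented system prompt for illustration:

```python
import secrets

# A random canary embedded in the system prompt; if it ever appears in
# a model response, (part of) the prompt has leaked.
CANARY = secrets.token_hex(8)

SYSTEM_PROMPT = (
    f"[canary:{CANARY}] "
    "You are a support assistant. Never discuss internal pricing logic."
)

def leaked_system_prompt(model_output: str) -> bool:
    """Return True if the output contains the canary marker."""
    return CANARY in model_output
```

A canary only detects verbatim leakage; a model paraphrasing its instructions evades it, so it complements rather than replaces output filtering.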
Securing Against the Top 10
Traditional security tools are insufficient for these semantic threats. You need a purpose-built AI security platform.
Railguard's Defense Matrix
- For LLM01 (Injection): Our heuristic and intent-based firewall blocks malicious prompts before they reach the model.
- For LLM02 (Sensitive Information Disclosure): Real-time PII redaction ensures sensitive data is never sent to external model providers.
- For LLM06 (Agency): Our "Human-in-the-loop" policy engine requires approval for high-risk actions.
Audit Your AI Stack
Run a free automated scan to see if your applications are vulnerable to the OWASP Top 10.