Groq adds Llama Prompt Guard 2 models for prompt injection detection
Action Required
Organizations running LLM applications can proactively mitigate the risk of prompt injection attacks by integrating these new models as an input-screening layer.
AI Impact Summary
Groq has released Llama Prompt Guard 2, two new specialized models designed to detect prompt injection attacks and jailbreaks in LLM applications. These models offer high accuracy and low latency, providing an optional security layer for organizations. Integrating these models can significantly reduce the risk of malicious prompts compromising LLM outputs, particularly in sensitive applications.
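A minimal sketch of how such a screening layer might look, assuming Groq's OpenAI-compatible chat completions endpoint and a Prompt Guard model ID such as `meta-llama/llama-prompt-guard-2-86m` (the exact model ID, response format, and score semantics are assumptions; consult Groq's documentation for the authoritative values):

```python
import json
import os
import urllib.request

# Hypothetical endpoint and model ID -- verify against Groq's docs.
API_URL = "https://api.groq.com/openai/v1/chat/completions"
GUARD_MODEL = "meta-llama/llama-prompt-guard-2-86m"  # assumed model ID


def parse_guard_score(text: str) -> float:
    """Assumes the guard model returns a numeric score in [0, 1],
    where higher values indicate a likely injection or jailbreak."""
    return float(text.strip())


def is_malicious(score: float, threshold: float = 0.5) -> bool:
    """Flag the prompt when the guard score exceeds the threshold."""
    return score > threshold


def screen_prompt(user_input: str) -> bool:
    """Return True if the guard model flags user_input as malicious.
    Requires GROQ_API_KEY in the environment; call this before
    forwarding user_input to the main LLM."""
    payload = json.dumps({
        "model": GUARD_MODEL,
        "messages": [{"role": "user", "content": user_input}],
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    score = parse_guard_score(body["choices"][0]["message"]["content"])
    return is_malicious(score)
```

The threshold is application-specific: sensitive applications may prefer a lower threshold (blocking more aggressively at the cost of false positives), while the low latency of these models makes it practical to screen every request.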
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium