Looking for alternatives to Promptfoo Guardrails, the adaptive LLM guardrails that self-improve via red-team feedback loops? Browse 24 similar AI security tools below, compare features side by side, and find the best fit for your security stack.
Firewall for LLM systems preventing prompt injection, data leaks, and jailbreaks
Secures homegrown AI and GenAI applications against prompt injection and abuse
AI guardrail module protecting LLMs from prompt injection and jailbreak attacks
End-to-end LLM security platform protecting against attacks and data leakage
End-to-end LLM security platform protecting GenAI interactions and applications
AI security platform and LLM guardrail solution integrated with AWS
Firewall protecting LLMs from prompt attacks, data leaks, and harmful outputs
AI firewall for runtime protection of AI models, applications, and agents
Real-time AI application security with trust scoring and guardrails
Runtime security layer for AI agents, RAG, and MCP with real-time controls
Safety reasoning model for content classification and trust & safety apps
Runtime guardrails for GenAI apps providing real-time threat detection and response
Enterprise AI firewall protecting AI agents, models, and chatbots from attacks
Real-time AI guardrails platform for detecting misuse, hallucinations, and attacks
Guardrails for protecting LLM and agentic applications from harmful content
AI data gateway securing LLM interactions by monitoring and redacting sensitive data
Secures AI-assisted dev environments with prompt-injection protection, DLP, and shadow-AI controls
Context-aware access control for AI pipelines, LLMs, and multi-agent workflows
AI guardrails tool for PII/PHI detection, masking, and content filtering in LLM apps
Open-source framework for real-time LLM safety, policy, and compliance enforcement
LLM Guard: security toolkit that hardens LLM interactions with sanitization, harmful-language detection, data-leakage prevention, and resistance to prompt injection attacks
API gateway for managing, securing, and observing outbound LLM traffic