
Looking for alternatives to Lunar.dev AI Gateway, an API gateway for managing, securing, and observing outbound LLM traffic? Browse 14 similar AI Security tools below, compare features side by side, and find the best fit for your security stack.
Adaptive LLM guardrails that self-improve via red-team feedback loops
Firewall for LLM systems preventing prompt injection, data leaks, and jailbreaks
Secures homegrown AI and GenAI applications against prompt injection and abuse
AI guardrail module protecting LLMs from prompt injection and jailbreak attacks
End-to-end LLM security platform protecting GenAI interactions and applications
Runtime security layer for AI agents, RAG, and MCP with real-time controls
End-to-end LLM security platform protecting against attacks and data leakage
Centralized gateway for accessing and securing AI models with routing and monitoring
AI security platform and LLM guardrail solution integrated with AWS
Secures AI-assisted dev environments against prompt injection, data loss, and shadow AI
Open-source framework for real-time LLM safety, policy, and compliance enforcement
LLM Guard is a security toolkit that hardens interactions with Large Language Models (LLMs), providing input and output sanitization, harmful-language detection, data-leakage prevention, and resistance to prompt injection attacks.