Loading...
AI security tools and solutions for protecting artificial intelligence systems, machine learning models, and AI-powered applications from cyber threats. Task: Open Source
Browse 12 security tools
CLI scanner that detects security threats in AI agent skills before installation.
Open-source CLI scanner for detecting security risks in AI agent skills.
Open-source framework for real-time LLM safety, policy & compliance enforcement.
Open-source LLM vulnerability scanner for AI red teaming and security testing.
Centralized gateway for accessing and securing AI models with routing & monitoring
Enterprise MCP gateway for managing, securing & controlling AI agent access to systems
AI observability platform for shadow AI discovery and inventory management
Fuzzing tool for testing and hardening AI application system prompts
Safety reasoning model for content classification and trust & safety apps
Open-source control plane for MCP tool traffic with inline policy enforcement
LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks.