LLM Guard Logo

LLM Guard

LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks.

2,043
Visit website
2
Compare
Compare
1
MCPThe entire cybersecurity market, one prompt awayTry MCP Access

LLM Guard Description

LLM Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs) by offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks. It provides a range of features, including prompt scanners, anonymization, and output scanners, to ensure safe and secure interactions with LLMs. LLM Guard is an open-source solution that is easy to integrate and deploy in production environments, and is designed for easy integration and deployment in production environments. It is available for download and installation via pip, and is compatible with Python 3.9 or higher. The tool is constantly being improved and updated, and the community is encouraged to contribute to the package through bug reports, feature requests, and code contributions. For more information, please refer to the documentation and contribution guidelines.

LLM Guard FAQ

Common questions about LLM Guard including features, pricing, alternatives, and user reviews.

LLM Guard is LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks.. It is a AI Security solution designed to help security teams with Open Source, Generative AI, Prompt Injection.

Have more questions? Browse our categories or search for specific tools.

ALTERNATIVES

Promptfoo Guardrails Logo

Adaptive LLM guardrails that self-improve via red team feedback loops.

0
CloudMatos Prompt Firewall Logo

Firewall for LLM systems preventing prompt injection, data leaks & jailbreaks

0
CyCraft XecGuard Logo

AI guardrail module protecting LLMs from prompt injection and jailbreak attacks

0
Tinfoil GPT-OSS Safeguard 120B Logo

Safety reasoning model for content classification and trust & safety apps

0
DeepKeep LLM Logo

End-to-end LLM security platform protecting against attacks and data leakage

0

Stay Updated with Mandos Brief

Get strategic cybersecurity insights in your inbox