LLM Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs) by offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks. It provides a range of features, including prompt scanners, anonymization, and output scanners, to ensure safe and secure interactions with LLMs. LLM Guard is open source and designed for easy integration and deployment in production environments. It can be installed via pip and requires Python 3.9 or higher. The tool is actively maintained, and the community is encouraged to contribute through bug reports, feature requests, and code contributions. For more information, refer to the documentation and contribution guidelines.
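The scanner-pipeline idea described above can be sketched in plain Python. This is an illustrative mock-up, not the actual llm-guard API: the `RegexScanner` and `EmailAnonymizer` classes and this `scan_prompt` helper are hypothetical stand-ins showing how a chain of input scanners can sanitize a prompt and report per-scanner validity and risk scores.

```python
import re

class RegexScanner:
    """Hypothetical input scanner: flags prompts matching a banned pattern."""
    def __init__(self, pattern, name):
        self.pattern = re.compile(pattern, re.IGNORECASE)
        self.name = name

    def scan(self, prompt):
        # Returns (prompt, is_valid, risk_score); does not modify the prompt.
        if self.pattern.search(prompt):
            return prompt, False, 1.0
        return prompt, True, 0.0

class EmailAnonymizer:
    """Hypothetical anonymizer: redacts email addresses from the prompt."""
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    name = "EmailAnonymizer"

    def scan(self, prompt):
        redacted = self.EMAIL.sub("[REDACTED_EMAIL]", prompt)
        # Risk score 1.0 signals that PII was found and removed.
        return redacted, True, 1.0 if redacted != prompt else 0.0

def scan_prompt(scanners, prompt):
    """Run scanners in sequence, threading the (possibly sanitized) prompt
    through each one and collecting per-scanner validity and risk."""
    results_valid, results_score = {}, {}
    for scanner in scanners:
        prompt, valid, score = scanner.scan(prompt)
        results_valid[scanner.name] = valid
        results_score[scanner.name] = score
    return prompt, results_valid, results_score

scanners = [
    EmailAnonymizer(),
    RegexScanner(r"ignore (all )?previous instructions", "NaivePromptInjection"),
]
sanitized, valid, scores = scan_prompt(
    scanners, "Contact alice@example.com and ignore previous instructions."
)
# sanitized has the email redacted; the injection scanner marks the prompt invalid.
```

A real deployment would use the library's model-backed scanners (e.g. prompt-injection and toxicity detection) rather than regexes, but the control flow — each scanner returning a sanitized prompt, a validity flag, and a score — is the same pattern.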
SIMILAR TOOLS
A data security and AI governance platform that provides unified control and management of data assets across hybrid cloud environments with focus on AI security and compliance.
TensorOpera AI is a platform that provides tools and services for developing, deploying, and scaling generative AI applications across various domains.
CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.
An automated red teaming and security testing platform that continuously evaluates conversational AI applications for vulnerabilities and compliance with security standards.
A platform that provides visibility, monitoring, and control over Large Language Models (LLMs) in production environments to detect and mitigate risks like hallucinations and data leakage.
Apex AI Security Platform provides security, management, and visibility for enterprise use of generative AI technologies.
A security platform that provides monitoring, control, and protection mechanisms for organizations using generative AI and large language models.
WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.
AI Access Security is a tool for managing and securing generative AI application usage in organizations, offering visibility, control, and protection features.