LLM Guard is a comprehensive tool for hardening the security of Large Language Models (LLMs), offering input sanitization, harmful-language detection, data-leakage prevention, and resistance to prompt injection attacks. It provides prompt scanners, anonymization, and output scanners to keep interactions with LLMs safe and secure. LLM Guard is open source and designed for easy integration and deployment in production environments. It is installable via pip and requires Python 3.9 or higher. The tool is actively maintained, and the community is encouraged to contribute bug reports, feature requests, and code; see the documentation and contribution guidelines for details.
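A minimal sketch of typical usage follows, based on the pattern documented in the project's README: input scanners sanitize the prompt before it reaches the model, and output scanners check the response on the way back. Scanner names and return shapes may differ across versions, and call_llm is a hypothetical placeholder for your own model call.

# pip install llm-guard  (requires Python 3.9+)
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Sensitive
from llm_guard.vault import Vault

# The vault stores anonymized placeholders so outputs can be de-anonymized later.
vault = Vault()
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Sensitive()]

prompt = "Summarize this email from john.doe@example.com about our Q3 roadmap."

# Sanitize and score the prompt before sending it to the LLM.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt failed scanning: {results_score}")

response = call_llm(sanitized_prompt)  # hypothetical LLM call, not part of LLM Guard

# Scan the model's response before returning it to the user.
sanitized_response, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response
)

Each scanner returns a per-scanner validity flag and risk score, so a pipeline can choose to block, log, or rewrite rather than failing outright.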
ALTERNATIVES
Mindgard is a continuous automated red teaming platform that enables security teams to identify and remediate vulnerabilities in AI systems, including generative AI and large language models.
Zania is an AI-driven platform that automates security and compliance tasks using autonomous agents for security inquiries, compliance assessments, and privacy regulation adherence.
A security platform that provides protection, monitoring, and governance for enterprise generative AI applications and LLMs against threats including prompt injection and data poisoning.
An automated red teaming and security testing platform that continuously evaluates conversational AI applications for vulnerabilities and compliance with security standards.
WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.
A data security and AI governance platform that provides unified control and management of data assets across hybrid cloud environments with focus on AI security and compliance.
AI Access Security is a tool for managing and securing generative AI application usage in organizations, offering visibility, control, and protection features.
Lakera is an automated safety and security assessment tool for GenAI applications.