LLM Guard Logo

LLM Guard

0
Free
1 saves
Updated 11 March 2025
Visit Website

LLM Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs) by offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks. It provides a range of features, including prompt scanners, anonymization, and output scanners, to ensure safe and secure interactions with LLMs. LLM Guard is an open-source solution that is easy to integrate and deploy in production environments, and is designed for easy integration and deployment in production environments. It is available for download and installation via pip, and is compatible with Python 3.9 or higher. The tool is constantly being improved and updated, and the community is encouraged to contribute to the package through bug reports, feature requests, and code contributions. For more information, please refer to the documentation and contribution guidelines.

FEATURES

SIMILAR TOOLS

A data security and AI governance platform that provides unified control and management of data assets across hybrid cloud environments with focus on AI security and compliance.

Commercial

TensorOpera AI is a platform that provides tools and services for developing, deploying, and scaling generative AI applications across various domains.

Commercial

CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.

Commercial

An automated red teaming and security testing platform that continuously evaluates conversational AI applications for vulnerabilities and compliance with security standards.

Commercial

A platform that provides visibility, monitoring, and control over Large Language Models (LLMs) in production environments to detect and mitigate risks like hallucinations and data leakage.

Commercial

Apex AI Security Platform provides security, management, and visibility for enterprise use of generative AI technologies.

Commercial

A security platform that provides monitoring, control, and protection mechanisms for organizations using generative AI and large language models.

Commercial

WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.

Commercial

AI Access Security is a tool for managing and securing generative AI application usage in organizations, offering visibility, control, and protection features.

Commercial
CyberSecTools logoCyberSecTools

Explore the largest curated directory of cybersecurity tools and resources to enhance your security practices. Find the right solution for your domain.

Operated by:

Mandos Cyber • KVK: 97994448

Netherlands • contact@mandos.io

Copyright © 2025 - All rights reserved