AI Security for Machine Learning
Tools and resources for securing AI systems and protecting against AI-powered threats. Task: Machine Learning. Explore 19 curated tools and resources.
LATEST ADDITIONS
A platform that provides visibility, monitoring, and control over Large Language Models (LLMs) in production environments to detect and mitigate risks like hallucinations and data leakage.
A data security and AI governance platform that provides unified control and management of data assets across hybrid cloud environments with focus on AI security and compliance.
An AI-driven security automation platform that uses specialized agents to assist security teams in SOC operations, GRC, and threat hunting tasks.
AI-powered platform that manages and monitors physical infrastructure systems while providing autonomous operation capabilities and smart city integration.
Security platform that provides protection, monitoring and governance for enterprise generative AI applications and LLMs against various threats including prompt injection and data poisoning.
An automated red teaming and security testing platform that continuously evaluates conversational AI applications for vulnerabilities and compliance with security standards.
A security platform that provides monitoring, control, and protection mechanisms for organizations using generative AI and large language models.
TensorOpera AI is a platform that provides tools and services for developing, deploying, and scaling generative AI applications across various domains.
Tumeryk is a comprehensive security solution for large language models and generative AI systems, offering risk assessment, protection against jailbreaks, content moderation, and policy enforcement.
TrojAI is an AI security platform that detects vulnerabilities in AI models and defends against attacks on AI applications.
Lakera is an automated safety and security assessment tool for GenAI applications.
LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks (a brief usage sketch appears after this list).
Adversa AI is a cybersecurity company that provides solutions for securing and hardening machine learning, artificial intelligence, and large language models against adversarial attacks, privacy issues, and safety incidents across various industries.
CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.
WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.
SentinelOne Purple AI is an AI-powered security analyst solution that simplifies threat hunting and investigations, empowers analysts, accelerates security operations, and safeguards data.
DIANNA is an AI-powered cybersecurity companion from Deep Instinct that analyzes and explains unknown threats, offering malware analysis and translating code intent into natural language.
FortiAI is an AI assistant that uses generative AI combined with Fortinet's security expertise to guide analysts through threat investigation, response automation, and complex SecOps workflows.
Infinity Platform / Infinity AI is a service that combines AI-powered threat intelligence with generative AI capabilities for comprehensive threat prevention, automated threat response, and efficient security administration.
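To give a sense of how a guardrail toolkit like LLM Guard (listed above) is typically wired into an application, the sketch below follows the input/output scanning pattern shown in LLM Guard's public documentation. The scanner classes and the scan_prompt/scan_output helpers come from that documentation, but exact names, return values, and model downloads can change between releases, and the model response here is a hard-coded placeholder rather than a real LLM call, so treat it as illustrative only.

```python
# Minimal sketch of an input/output scanning pipeline in the style of LLM Guard's
# published examples (pip install llm-guard). Scanner names, signatures, and the
# models they download at first run may differ between versions.
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # holds placeholders so anonymized values can be restored later

# Input-side checks: PII sanitization, harmful-language detection, prompt injection.
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
# Output-side checks: restore placeholders and block sensitive-data leakage.
output_scanners = [Deanonymize(vault), Sensitive()]

prompt = "Summarize the ticket filed by jane.doe@example.com and list her contact details."
sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)
if not all(input_valid.values()):
    raise ValueError(f"Prompt rejected by input scanners: {input_scores}")

# Placeholder standing in for the model call; in practice this is your LLM's reply.
model_response = "The requester asked for a summary of the ticket."

sanitized_response, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, model_response
)
if not all(output_valid.values()):
    raise ValueError(f"Response rejected by output scanners: {output_scores}")
print(sanitized_response)
```

The design choice worth noting is the vault: values stripped by Anonymize on the way in can be restored by Deanonymize on the way out, so downstream users see the real data while the model itself never does.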