AI Security for Machine Learning
Tools and resources for securing AI systems and protecting against AI-powered threats.
Explore 18 curated tools and resources
PINNED

InfoSecHired
An AI-powered career platform that automates the creation of cybersecurity job application materials and provides company-specific insights for job seekers.

Checkmarx SCA
A software composition analysis tool that identifies vulnerabilities, malicious code, and license risks in open source dependencies throughout the software development lifecycle.

Check Point CloudGuard WAF
A cloud-native web application and API security solution that uses contextual AI to protect against known and zero-day threats without signature-based detection.

Orca Security
A cloud-native application protection platform that provides agentless security monitoring, vulnerability management, and compliance capabilities across multi-cloud environments.

DryRun
A GitHub application that performs automated security code reviews by analyzing contextual security aspects of code changes during pull requests.

Wiz
A cloud-native security platform that enables security, development, and DevOps teams to work together in a self-service model, detecting and preventing cloud security threats in real time.
LATEST ADDITIONS
A data security and AI governance platform that provides unified control and management of data assets across hybrid cloud environments with focus on AI security and compliance.
An AI-driven security automation platform that uses specialized agents to assist security teams in SOC operations, GRC, and threat hunting tasks.
An AI-powered platform that manages and monitors physical infrastructure systems while providing autonomous operation capabilities and smart-city integration.
Security platform that provides protection, monitoring and governance for enterprise generative AI applications and LLMs against various threats including prompt injection and data poisoning.
An automated red teaming and security testing platform that continuously evaluates conversational AI applications for vulnerabilities and compliance with security standards.
A security platform that provides monitoring, control, and protection mechanisms for organizations using generative AI and large language models.
TensorOpera AI is a platform that provides tools and services for developing, deploying, and scaling generative AI applications across various domains.
Tumeryk is a comprehensive security solution for large language models and generative AI systems, offering risk assessment, protection against jailbreaks, content moderation, and policy enforcement.
TrojAI is an AI security platform that detects vulnerabilities in AI models and defends against attacks on AI applications.
Lakera is an automated safety and security assessment tool for GenAI applications.
LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks.
Adversa AI is a cybersecurity company that provides solutions for securing and hardening machine learning, artificial intelligence, and large language models against adversarial attacks, privacy issues, and safety incidents across various industries.
CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.
WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.
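Several of the tools above (LLM Guard, Tumeryk, WhyLabs) screen user input for prompt-injection attempts before it reaches the model. A minimal sketch of that idea follows; the pattern list and `scan_prompt` function are illustrative assumptions, not any vendor's actual API.

```python
import re

# Hypothetical prompt-injection screening sketch, in the spirit of input
# scanners like LLM Guard. Real products use far richer detection
# (classifiers, embeddings) than these toy regex patterns.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (the |your )?system prompt",
]

def scan_prompt(prompt: str) -> tuple[str, bool]:
    """Return (sanitized_prompt, is_safe)."""
    # Sanitization step: strip non-printable control characters,
    # which are sometimes used to hide injected instructions.
    sanitized = "".join(
        ch for ch in prompt if ch.isprintable() or ch in "\n\t"
    )
    # Detection step: flag the prompt if any known pattern matches.
    lowered = sanitized.lower()
    is_safe = not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
    return sanitized, is_safe
```

In a production pipeline a scanner like this would sit in front of the model call, with unsafe prompts blocked or routed to review rather than merely flagged.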