Tools and resources for securing AI systems and protecting against AI-powered threats.
Explore 28 curated tools and resources
An AI-powered career platform that automates the creation of cybersecurity job application materials and provides company-specific insights for job seekers.
A software composition analysis tool that identifies vulnerabilities, malicious code, and license risks in open source dependencies throughout the software development lifecycle.
A cloud-native web application and API security solution that uses contextual AI to protect against known and zero-day threats without signature-based detection.
A cloud-native application protection platform that provides agentless security monitoring, vulnerability management, and compliance capabilities across multi-cloud environments.
A GitHub application that performs automated security code reviews by analyzing contextual security aspects of code changes during pull requests.
Wiz Cloud Security Platform is a cloud-native security platform that enables security, dev, and devops to work together in a self-service model, detecting and preventing cloud security threats in real-time.
AI Access Security is a tool for managing and securing generative AI application usage in organizations, offering visibility, control, and protection features.
Apex AI Security Platform provides security, management, and visibility for enterprise use of generative AI technologies.
Lakera is an automated safety and security assessment tool for GenAI applications
LLM Guard is a security toolkit that enhances the safety and security of interactions with Large Language Models (LLMs) by providing features like sanitization, harmful language detection, data leakage prevention, and resistance against prompt injection attacks.
Adversa AI is a cybersecurity company that provides solutions for securing and hardening machine learning, artificial intelligence, and large language models against adversarial attacks, privacy issues, and safety incidents across various industries.
CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.
WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.
Mindgard is a continuous automated red teaming platform that enables security teams to identify and remediate vulnerabilities in AI systems, including generative AI and large language models.
Sense Defence is a next-generation web security suite that leverages AI to provide real-time threat detection and blocking.
SentinelOne Purple AI is an AI-powered security analyst solution that simplifies threat hunting and investigations, empowers analysts, accelerates security operations, and safeguards data.
DIANNA is an AI-powered cybersecurity companion from Deep Instinct that analyzes and explains unknown threats, offering malware analysis and translating code intent into natural language.
FortiAI is an AI assistant that uses generative AI combined with Fortinet's security expertise to guide analysts through threat investigation, response automation, and complex SecOps workflows.
Infinity Platform / Infinity AI is an AI-powered threat intelligence and generative AI service that combines AI-powered threat intelligence with generative AI capabilities for comprehensive threat prevention, automated threat response, and efficient security administration.
VIDOC is an AI-powered security tool that automates code review, detects and fixes vulnerabilities, and monitors external security, ensuring the integrity of both human-written and AI-generated code in software development pipelines.