
Academic research lab focused on privacy-preserving and secure AI/ML.
Secure AI Lab is an academic research laboratory focused on privacy-preserving and secure artificial intelligence. The lab conducts both fundamental and applied research aimed at advancing the theoretical foundations and practical deployment of trustworthy AI systems.

Research Areas:
- Privacy-preserving deep learning
- Homomorphic encryption applied to AI/ML workflows
- Secure multi-party computation for machine learning
- Federated learning security and privacy
- Differential privacy mechanisms

Published Research and Frameworks:
- Homomorphic encryption in federated learning: a framework that embeds Fully Homomorphic Encryption (FHE) into the FL aggregation pipeline, using the CKKS scheme for real-valued gradients and BFV for integer weight updates, so that gradient averaging runs entirely in the encrypted domain.
- SecPATE: an enhancement of the Private Aggregation of Teacher Ensembles (PATE) framework that incorporates Secure Multi-Party Computation (SMC) for privacy-preserving aggregation of teacher model predictions.
- Pri-WeDec: a framework for inference on encrypted image data, combining FHE with a customized CNN; it targets weapon detection in digital forensics without exposing sensitive evidence to untrusted environments.

Resources Provided:
- Source code repositories (GitHub)
- Academic publications and conference papers
- Teaching materials
- Research data and watchlists
- Scholarship and funding information
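To illustrate the idea of gradient averaging in the encrypted domain, here is a toy sketch. It is not the lab's framework: Paillier encryption (additively homomorphic, implemented here with small hardcoded primes, so entirely insecure) stands in for CKKS/BFV, since averaging needs only additions, and the fixed-point scaling step loosely mirrors how CKKS encodes real-valued gradients.

```python
# Toy encrypted gradient averaging: clients encrypt gradients, the
# server combines ciphertexts without ever decrypting them.
# Paillier with tiny fixed primes -- illustration only, NOT secure.
import math
import random

P, Q = 999983, 1000003          # small primes; real systems use ~2048-bit
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)    # Carmichael lambda of N
G = N + 1                       # standard Paillier generator choice

def L(u):
    return (u - 1) // N

MU = pow(L(pow(G, LAM, N2)), -1, N)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return (L(pow(c, LAM, N2)) * MU) % N

SCALE = 10**4  # fixed-point scale so real-valued gradients become integers

def enc_gradient(grad):
    # Encode each coordinate as a fixed-point integer mod N, then encrypt.
    return [encrypt(int(round(g * SCALE)) % N) for g in grad]

def aggregate(enc_grads):
    # Server side: multiplying ciphertexts adds the plaintexts underneath.
    agg = enc_grads[0]
    for eg in enc_grads[1:]:
        agg = [(a * b) % N2 for a, b in zip(agg, eg)]
    return agg

# Three clients average a 2-dimensional gradient under encryption.
client_grads = [[0.5, -0.2], [0.3, 0.1], [0.1, 0.4]]
encrypted = [enc_gradient(g) for g in client_grads]
summed = aggregate(encrypted)
avg = []
for c in summed:
    m = decrypt(c)
    if m > N // 2:              # recover negative fixed-point values
        m -= N
    avg.append(m / SCALE / len(client_grads))
# avg is approximately [0.3, 0.1]
```

Only the decryption-key holder (e.g. the clients jointly, or a key-management party) can open the aggregated result; the aggregating server sees ciphertexts throughout, which is the property the framework's FHE pipeline provides.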
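For the SecPATE line of work, the following is a minimal sketch of the underlying PATE noisy-max aggregation, the step that SecPATE protects with SMC. Here a trusted aggregator adds Laplace noise to teacher vote counts; in SecPATE the same tally would be computed under secure multi-party computation so that no single party sees the raw votes. All function names are illustrative, not the lab's actual API.

```python
# PATE-style noisy-max: aggregate teacher labels with Laplace noise.
import math
import random
from collections import Counter

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(max(1e-12, 1 - 2 * abs(u)))

def noisy_max(teacher_labels, num_classes, epsilon, rng=None):
    """Return the class with the highest Laplace-noised vote count."""
    rng = rng or random.Random()
    votes = Counter(teacher_labels)
    noisy = [votes.get(c, 0) + laplace_noise(2.0 / epsilon, rng)
             for c in range(num_classes)]
    return max(range(num_classes), key=lambda c: noisy[c])

# 100 teachers with strong consensus on class 2.
teachers = [2] * 80 + [0] * 12 + [1] * 8
label = noisy_max(teachers, num_classes=3, epsilon=1.0, rng=random.Random(0))
# With this large a vote gap, label is 2 with overwhelming probability.
```

The noise scale 2/epsilon reflects that one teacher changing its vote moves two counts by 1 each; when the consensus margin dwarfs the noise, the released label is accurate while individual teachers stay protected.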
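The secure multi-party computation research area can be sketched with its simplest primitive, additive secret sharing: each client splits a value into random shares that sum to the value modulo a large prime, so no single server learns anything, yet the servers' share-sums reconstruct the total. This is a generic textbook construction, not the lab's specific protocol.

```python
# Additive secret sharing: privately sum values across untrusting servers.
import random

MOD = 2**61 - 1  # a large Mersenne prime (illustrative modulus choice)

def share(value, n_servers, rng):
    # n-1 uniformly random shares, plus one correcting share.
    shares = [rng.randrange(MOD) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(share_sums):
    return sum(share_sums) % MOD

rng = random.Random(42)
client_values = [13, 29, 58]   # e.g. per-class teacher vote counts
n_servers = 3
# Each server receives one share from every client and adds them locally.
server_totals = [0] * n_servers
for v in client_values:
    for s, sh in enumerate(share(v, n_servers, rng)):
        server_totals[s] = (server_totals[s] + sh) % MOD
assert reconstruct(server_totals) == sum(client_values)  # 100
```

Any n-1 of the servers see only uniformly random values; only combining all share-sums reveals the aggregate, which is why share-based tallying is a natural fit for the private vote aggregation in SecPATE-style systems.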
Secure AI Lab is an academic research lab focused on privacy-preserving and secure AI/ML. Its work supports security teams and researchers in AI security, security research, and adversarial ML.