Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
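A minimal sketch of ART's evasion workflow, assuming a scikit-learn logistic regression on toy data (the dataset and the eps budget are illustrative choices): wrap the fitted model in an ART estimator, then craft adversarial examples with the Fast Gradient Method.

```python
# Wrap a fitted scikit-learn model and craft adversarial examples
# with ART's Fast Gradient Method (FGM). Toy data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
classifier = SklearnClassifier(model=model)

# eps bounds the perturbation size; 0.2 is an arbitrary budget here.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```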
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
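A minimal sketch of AIF360's dataset-metric workflow, assuming a toy pandas DataFrame with a binary protected attribute (the column names and group definitions here are illustrative):

```python
# Compute dataset-level fairness metrics with AIF360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute, 'label' the outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],
    "score": [0.2, 0.5, 0.9, 0.3, 0.6, 0.8],
    "label": [0, 0, 1, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# 0.0 means parity; negative values favor the privileged group.
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:             ", metric.disparate_impact())
```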
Interpretability and explainability of data and machine learning models
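For illustration, a sketch using AIX360's Protodash algorithm to pick representative prototypes from a dataset; it assumes an explain(X, Y, m) signature returning weights, selected indices, and objective values, and the toy data and m=3 are arbitrary choices.

```python
# Select prototypical rows of a dataset with AIX360's Protodash.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

explainer = ProtodashExplainer()
# Pick m=3 rows that best summarize X; W are importance weights,
# S the indices of the selected prototypes.
W, S, _ = explainer.explain(X, X, m=3)
print("prototype indices:", S)
print("importance weights:", W)
```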
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
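As a concrete example of the kind of quantity UQ360 evaluates, here is prediction interval coverage probability (PICP), implemented directly in NumPy for illustration rather than through the toolkit's own API; the intervals below are made up.

```python
# Prediction interval coverage probability (PICP): the fraction of
# true values that land inside their predicted intervals.
import numpy as np

y_true  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_lower = np.array([0.5, 1.8, 2.0, 4.2, 4.5])  # interval lower bounds
y_upper = np.array([1.5, 2.5, 3.5, 4.8, 5.5])  # interval upper bounds

picp = np.mean((y_true >= y_lower) & (y_true <= y_upper))
print(f"PICP: {picp:.2f}")  # 0.80: the point 4.0 misses its interval
```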
Paddle with decentralized trust, based on XuperChain
Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks
Hands-on workshop material for evaluating the performance, fairness, and robustness of models
Security protocols for estimating the adversarial robustness of machine learning models on both tabular and image datasets. The package implements a set of evasion attacks based on metaheuristic optimization algorithms, with complex cost functions that give reliable results for tabular problems; a generic sketch of the idea follows.
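Such attacks can be framed as a black-box search over perturbations of a tabular input, scored by a cost function that rewards misclassification and penalizes large changes. The sketch below uses plain random search in place of a real metaheuristic; the evade helper, the toy data, and the cost function are hypothetical illustrations, not this package's API.

```python
# Generic black-box evasion sketch for tabular data: random search
# for the smallest perturbation that flips the model's prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def evade(model, x, y_true, eps=0.5, iters=500, seed=0):
    # Cost = L2 size of the perturbation, infinite while the model
    # still predicts the true class.
    rng = np.random.default_rng(seed)
    best, best_cost = x, np.inf
    for _ in range(iters):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        pred = model.predict(candidate.reshape(1, -1))[0]
        cost = np.linalg.norm(candidate - x) if pred != y_true else np.inf
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

x_adv, cost = evade(model, X[0], y[0])
print("clean prediction:      ", model.predict(X[:1])[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```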