47 results found
Filter by Primary Language: Python (30), Jupyter Notebook (9), C++ (2), TeX (1)
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Created
2018-03-15
12,410 commits to main branch, last one 2 days ago
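For orientation, the ART entry above wraps models in framework-specific estimator classes and runs attacks against them. A minimal sketch of an FGSM evasion attack with ART might look like the following; the toy model, random data, and epsilon are illustrative assumptions, not taken from the repo's docs.

```python
# Hedged sketch: an FGSM evasion attack with ART against a toy PyTorch model.
# The model, data, and eps are placeholder assumptions.
import numpy as np
import torch.nn as nn
import torch.optim as optim

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-sized net
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in inputs
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)  # adversarial counterparts of x_test
```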
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Created
2019-10-15
2,707 commits to master branch, last one 3 months ago
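TextAttack is organized around pre-built attack recipes that run against wrapped Hugging Face models. A hedged sketch of that workflow follows; the checkpoint and dataset names are assumptions, not taken from this listing.

```python
# Hedged sketch of running a built-in TextAttack recipe against a Hugging Face
# model; the checkpoint name and dataset are placeholder assumptions.
import transformers

from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb"  # assumed checkpoint name
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "textattack/bert-base-uncased-imdb"
)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(model_wrapper)     # word-substitution attack recipe
dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()  # prints per-example attack results
```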
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Created
2017-06-14
1,711 commits to master branch, last one 8 months ago
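This toolbox (unnamed in the listing) exposes attacks such as FGSM behind a framework-agnostic API spanning PyTorch, TensorFlow, and JAX. As a point of reference, the core of the fast gradient sign method in plain PyTorch, not this toolbox's own API, is just:

```python
# Plain-PyTorch FGSM (Goodfellow et al., 2015) for reference; this is NOT the
# toolbox's own API, just the underlying idea it wraps.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Return x perturbed by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```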
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models.
Created
2018-08-08
378 commits to master branch, last one 2 years ago
A Toolbox for Adversarial Robustness Research
Created
2018-11-29
309 commits to master branch, last one 2 years ago
A PyTorch adversarial library for attack and defense methods on images and graphs
Created
2019-09-21
856 commits to master branch, last one 4 months ago
Raising the Cost of Malicious AI-Powered Image Editing
Created
2022-11-03
13 commits to main branch, last one about a year ago
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Created
2018-08-08
15 commits to master branch, last one 6 years ago
Implementation of Papers on Adversarial Examples
Created
2018-01-27
24 commits to master branch, last one about a year ago
Adversarial attacks and defenses on Graph Neural Networks.
Created
2019-08-12
47 commits to master branch, last one about a year ago
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
Topics: dbms, privacy, paillier, security, k-anonymity, deep-learning, evasion-attack, machine-learning, poisoning-attacks, federated-learning, adversarial-attacks, adversarial-examples, differential-privacy, membership-inference, paillier-cryptosystem, homomorphic-encryption, model-inversion-attacks, adversarial-machine-learning
Created
2021-01-16
939 commits to main branch, last one 7 months ago
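Among the topics listed for this simulator, membership inference is perhaps the simplest to illustrate. A generic loss-threshold baseline is sketched below; it is not this repository's API, and the model, data, and threshold are assumptions.

```python
# Baseline loss-threshold membership inference (in the style of Yeom et al., 2018);
# a generic sketch, not this repository's API.
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_scores(model, x, y):
    """Lower per-example loss => more likely the example was in the training set."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    return -losses  # higher score = predicted "member"

def predict_members(model, x, y, threshold):
    return membership_scores(model, x, y) > threshold  # boolean member mask
```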
💡 Adversarial attacks on explanations and how to defend them
Created
2020-07-30
121 commits to master branch, last one 8 months ago
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Created
2020-03-02
24 commits to master branch, last one 7 months ago
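auto_LiRPA's core workflow is to wrap a model, attach a perturbation specification to the input, and ask for certified output bounds. A minimal sketch under an assumed toy network and epsilon might be:

```python
# Sketch of computing certified bounds with auto_LiRPA; the toy network and
# perturbation radius are illustrative assumptions.
import torch
import torch.nn as nn

from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
dummy = torch.zeros(1, 1, 28, 28)
bounded_net = BoundedModule(net, dummy)                # wrap the model for bound computation

x = torch.rand(1, 1, 28, 28)
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)  # L_inf ball of radius eps
x_bounded = BoundedTensor(x, ptb)

lb, ub = bounded_net.compute_bounds(x=(x_bounded,), method="CROWN")
# lb/ub are certified lower/upper bounds on each output logit over the eps-ball
```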
A curated list of awesome resources for adversarial examples in deep learning
Created
2017-11-27
9 commits to master branch, last one 5 years ago
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)
Created
2021-06-29
37 commits to main branch, last one 4 months ago
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
Created
2019-05-28
403 commits to master branch, last one 2 years ago
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
Created
2021-09-06
86 commits to main branch, last one 14 days ago
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
Created
2019-04-17
38 commits to master branch, last one 5 years ago
Library containing PyTorch implementations of various adversarial attacks and resources
Created
2020-11-24
147 commits to main branch, last one about a month ago
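Libraries of this kind typically center on the projected gradient descent (PGD) attack. For reference, a plain-PyTorch L_inf PGD loop, not this repository's own API, reduces to:

```python
# Plain-PyTorch L_inf PGD (Madry et al., 2018) for reference; not this
# repository's own API, just the standard algorithm such libraries provide.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start in the eps-ball
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```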
A curated list of academic events on AI Security & Privacy
Created
2021-10-04
70 commits to main branch, last one 3 months ago
Revisiting Transferable Adversarial Images (arXiv)
Created
2022-10-23
141 commits to main branch, last one about a month ago
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Created
2019-01-28
10 commits to master branch, last one 2 years ago
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Created
2020-03-01
10 commits to master branch, last one 3 years ago
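The provable defense referenced here builds on randomized smoothing. A generic sketch of smoothed prediction, a majority vote over Gaussian-noised inputs, follows; it is not this repository's code, and the noise level and sample count are assumptions.

```python
# Generic randomized-smoothing prediction (Cohen et al., 2019 style); a sketch
# of the idea behind provable defenses for pretrained/black-box classifiers,
# not this repository's code.
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n=100, num_classes=10):
    """Majority vote of the base classifier over Gaussian-noised copies of x."""
    counts = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        pred = model(noisy).argmax(dim=1).item()  # x assumed to be a 1-example batch
        counts[pred] += 1
    return counts.argmax()  # class predicted by the smoothed classifier
```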
Understanding and Improving Fast Adversarial Training [NeurIPS 2020]
Created
2020-07-06
20 commits to master branch, last one 3 years ago
Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTorch).
Created
2019-06-05
12 commits to master branch, last one 3 years ago
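The IBP half of this defense propagates interval bounds layer by layer. For a single affine layer the bound computation, shown here as a generic sketch rather than this repository's code, is:

```python
# Interval bound propagation through an affine layer y = x W^T + b (generic
# sketch, not this repository's code). Given element-wise input bounds [l, u],
# output bounds follow from splitting W into positive and negative parts.
import torch

def ibp_linear(W, b, l, u):
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    lower = l @ W_pos.t() + u @ W_neg.t() + b
    upper = u @ W_pos.t() + l @ W_neg.t() + b
    return lower, upper  # certified element-wise bounds on the layer output
```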
Patch-wise iterative attack (accepted by ECCV 2020) to improve the transferability of adversarial examples.
Created
2020-02-25
87 commits to master branch, last one 2 years ago
A Closer Look at Accuracy vs. Robustness
Created
2020-02-08
7 commits to master branch, last one 3 years ago
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"
Created
2020-04-28
26 commits to master branch, last one 3 years ago
Code for "Adversarial attack by dropping information." (ICCV 2021)
Created
2021-04-12
29 commits to main branch, last one 3 years ago
A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security.
Created
2022-12-03
7 commits to main branch, last one about a year ago