134 results found Sort:
- Filter by Primary Language:
- Python (93)
- Jupyter Notebook (17)
- JavaScript (2)
- Go (2)
- C++ (1)
- Zig (1)
- PureBasic (1)
- Rust (1)
- TeX (1)
- +
Adversary Emulation Framework
Created
2019-01-17
4,828 commits to master branch, last one 17 hours ago
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Created
2018-03-15
12,410 commits to main branch, last one 15 days ago
Data augmentation for NLP
Created
2019-03-21
738 commits to master branch, last one 2 years ago
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Created
2019-10-15
2,707 commits to master branch, last one 4 months ago
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Created
2017-06-14
1,711 commits to master branch, last one 9 months ago
A unified evaluation framework for large language models
Created
2023-06-13
259 commits to main branch, last one 2 months ago
PyTorch implementation of adversarial attacks [torchattacks]
Created
2019-04-18
644 commits to master branch, last one 11 months ago
Must-read Papers on Textual Adversarial Attack and Defense
Created
2019-06-09
176 commits to master branch, last one 29 days ago
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle、PyTorch、Caffe2、MxNet、Keras、TensorFlow and Advbox can benchmark the robustness of machine learning models....
Created
2018-08-08
378 commits to master branch, last one 2 years ago
A Toolbox for Adversarial Robustness Research
Created
2018-11-29
309 commits to master branch, last one 2 years ago
A pytorch adversarial library for attack and defense methods on images and graphs
Created
2019-09-21
856 commits to master branch, last one 4 months ago
A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
Created
2024-01-09
407 commits to main branch, last one 5 days ago
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanatio...
Created
2017-11-03
466 commits to master branch, last one 6 months ago
A curated list of adversarial attacks and defenses papers on graph-structured data.
Created
2019-04-26
176 commits to master branch, last one 11 months ago
An Open-Source Package for Textual Adversarial Attack.
Created
2020-02-29
686 commits to master branch, last one 2 years ago
Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
Created
2020-02-17
176 commits to master branch, last one about a year ago
A Harder ImageNet Test Set (CVPR 2021)
Created
2019-06-10
16 commits to master branch, last one 3 years ago
Raising the Cost of Malicious AI-Powered Image Editing
Created
2022-11-03
13 commits to main branch, last one about a year ago
A Model for Natural Language Attack on Text Classification and Inference
Created
2019-09-03
23 commits to master branch, last one 2 years ago
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Created
2020-03-19
190 commits to main branch, last one about a year ago
Implementation of Papers on Adversarial Examples
Created
2018-01-27
24 commits to master branch, last one about a year ago
Adversarial attacks and defenses on Graph Neural Networks.
Created
2019-08-12
47 commits to master branch, last one about a year ago
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
dbms
privacy
paillier
security
k-anonymity
deep-learning
evasion-attack
machine-learning
poisoning-attacks
federated-learning
adversarial-attacks
adversarial-examples
differential-privacy
membership-inference
paillier-cryptosystem
homomorphic-encryption
model-inversion-attacks
adversarial-machine-learning
Created
2021-01-16
939 commits to main branch, last one 7 months ago
A suite for hunting suspicious targets, expose domains and phishing discovery
Created
2023-07-16
80 commits to main branch, last one 6 months ago
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created
2023-09-04
230 commits to main branch, last one 10 months ago
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Sa...
Created
2022-10-25
2 commits to main branch, last one 2 years ago
💡 Adversarial attacks on explanations and how to defend them
Created
2020-07-30
124 commits to master branch, last one 3 days ago
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Created
2020-05-20
32 commits to master branch, last one about a year ago
TrojanZoo provides a universal pytorch platform to conduct security researches (especially backdoor attacks/defenses) of image classification in deep learning.
Created
2020-05-11
1,525 commits to main branch, last one 3 months ago
🎃 PumpBin is an Implant Generation Platform.
Created
2024-06-11
56 commits to main branch, last one 4 months ago