29 results found

427 · 2.0k · mit · 39
A Python package to assess and improve fairness of machine learning models.
Created 2018-05-15
920 commits to main branch, last one 4 days ago
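
A minimal usage sketch with fairlearn's MetricFrame, which breaks metrics down by a sensitive feature so between-group gaps can be read off directly; the toy data below is illustrative and not from the repository:

    # Hedged sketch: assumes fairlearn and scikit-learn are installed; the data is made up.
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    y_true = [0, 1, 1, 0, 1, 0, 1, 0]                   # illustrative labels
    y_pred = [0, 1, 0, 0, 1, 1, 1, 0]                   # illustrative predictions
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]    # illustrative sensitive feature

    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(mf.by_group)      # metric value per group
    print(mf.difference())  # largest between-group gap per metric
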
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libr...
Created 2020-07-06
1,970 commits to main branch, last one 4 months ago
😎 Everything about class-imbalanced/long-tail learning: papers, codes, frameworks, and libraries
Created 2020-03-05
52 commits to master branch, last one about a year ago
66 · 478 · apache-2.0 · 15
A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.
Created 2022-03-18
165 commits to main branch, last one 2 months ago
Tensorflow's Fairness Evaluation and Visualization Toolkit
Created 2019-09-30
330 commits to master branch, last one 3 days ago
Code for reproducing our analysis in the paper titled: Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency
Created 2021-05-18
9 commits to main branch, last one 3 years ago
Fair Resource Allocation in Federated Learning (ICLR '20)
Created 2019-05-24
22 commits to master branch, last one 3 years ago
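
For orientation, the paper's q-FFL objective (reconstructed from memory, not code taken from this repository) reweights each device's empirical loss F_k by raising it to the power q+1, so larger q puts more weight on the worst-performing devices; a minimal numeric sketch:

    # Hedged sketch of the q-FFL aggregate objective (q >= 0); not the repo's own code.
    import numpy as np

    def q_ffl_objective(device_losses, device_weights, q):
        """Sum_k p_k * F_k^(q+1) / (q+1); q = 0 reduces to the plain weighted average."""
        losses = np.asarray(device_losses, dtype=float)
        weights = np.asarray(device_weights, dtype=float)
        return float(np.sum(weights * losses ** (q + 1) / (q + 1)))

    losses = [0.2, 0.9, 0.4]         # illustrative per-device empirical losses
    weights = [1 / 3, 1 / 3, 1 / 3]  # illustrative device weights p_k
    print(q_ffl_objective(losses, weights, q=0.0))  # standard federated objective
    print(q_ffl_objective(losses, weights, q=5.0))  # emphasizes the worst-off device
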
14 · 175 · mit · 6
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes the bias measurement and mitigation in Word Embeddings models. Please feel welcome to open an issue in ca...
Created 2020-03-11
274 commits to develop branch, last one about a year ago
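
A hedged usage sketch of a WEAT query with WEFE; the word lists and the downloaded GloVe model are illustrative choices, not taken from the repository:

    # Hedged sketch: assumes WEFE's Query/WEAT API and gensim's downloader are available.
    import gensim.downloader as api
    from wefe.word_embedding_model import WordEmbeddingModel
    from wefe.query import Query
    from wefe.metrics import WEAT

    model = WordEmbeddingModel(api.load("glove-wiki-gigaword-50"), "glove-50")

    query = Query(
        target_sets=[["he", "man", "his"], ["she", "woman", "her"]],      # illustrative targets
        attribute_sets=[["engineer", "science"], ["nurse", "arts"]],      # illustrative attributes
        target_sets_names=["Male terms", "Female terms"],
        attribute_sets_names=["Career terms", "Family terms"],
    )
    print(WEAT().run_query(query, model))  # dict with the WEAT score for this query
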
21 · 167 · bsd-2-clause · 15
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows.
Created 2020-07-01
17 commits to main branch, last one 3 years ago
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Created 2019-02-12
89 commits to master branch, last one 3 years ago
21 · 117 · other · 10
Code accompanying our papers on the "Generative Distributional Control" framework
Created 2021-03-05
12 commits to master branch, last one 2 years ago
5 · 103 · other · 13
Train Gradient Boosting models that are both high-performance *and* Fair!
Created 2022-03-28
2,605 commits to main-fairgbm branch, last one 6 months ago
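
A hedged usage sketch, assuming the FairGBMClassifier sklearn-style API described in the project README; the synthetic data and parameter values are illustrative:

    # Hedged sketch: assumes fairgbm exposes FairGBMClassifier with a LightGBM-style API.
    import numpy as np
    from fairgbm import FairGBMClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    s = rng.integers(0, 2, size=500)  # illustrative protected-group indicator
    y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=500) > 0).astype(int)

    clf = FairGBMClassifier(constraint_type="FNR", n_estimators=100)
    clf.fit(X, y, constraint_group=s)        # group membership is passed at fit time
    scores = clf.predict_proba(X)[:, -1]     # constrained model's predicted scores
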
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Created 2018-08-02
294 commits to master branch, last one 3 years ago
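
A hedged usage sketch of a group audit, assuming Aequitas' documented convention of 'score' and 'label_value' columns plus attribute columns; the toy data is made up:

    # Hedged sketch: assumes aequitas.group.Group and its get_crosstabs() interface.
    import pandas as pd
    from aequitas.group import Group

    df = pd.DataFrame({
        "score":       [1, 0, 1, 1, 0, 1, 0, 0],           # illustrative binary decisions
        "label_value": [1, 0, 0, 1, 0, 1, 1, 0],           # illustrative ground truth
        "group":       ["a", "a", "a", "a", "b", "b", "b", "b"],  # illustrative attribute
    })

    xtab, _ = Group().get_crosstabs(df)  # per-group confusion-matrix counts and rates
    print(xtab.head())
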
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
Created 2024-09-20
183 commits to main branch, last one a day ago
Flexible tool for bias detection, visualization, and mitigation
Created 2020-03-28
231 commits to master branch, last one 2 years ago
9 · 80 · apache-2.0 · 10
A Python toolkit for analyzing machine learning models and datasets.
Created 2023-04-29
2 commits to main branch, last one about a year ago
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper
Created 2021-03-02
2 commits to main branch, last one 3 years ago
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Created 2020-03-11
63 commits to master branch, last one 3 years ago
Papers and online resources related to machine learning fairness
Created 2021-07-17
70 commits to main branch, last one about a year ago
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.
Created 2020-04-21
137 commits to master branch, last one about a year ago
Tilted Empirical Risk Minimization (ICLR '21)
Created 2020-07-02
19 commits to master branch, last one about a year ago
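
For orientation, the tilted objective from the paper (reconstructed from memory, not the repository's code) replaces the usual average of per-sample losses with a log-sum-exp whose tilt parameter t interpolates between average-case (t near 0) and worst-case (large t) behavior; a minimal numeric sketch:

    # Hedged sketch of the tilted empirical risk for given per-sample losses; not repo code.
    import numpy as np

    def tilted_risk(losses, t):
        """(1/t) * log(mean(exp(t * losses))); t -> 0 recovers the plain mean.
        A production version would use a log-sum-exp trick for numerical stability."""
        losses = np.asarray(losses, dtype=float)
        if abs(t) < 1e-12:
            return float(losses.mean())
        return float(np.log(np.mean(np.exp(t * losses))) / t)

    per_sample = [0.1, 0.2, 3.0]             # illustrative per-sample losses, one outlier
    print(tilted_risk(per_sample, t=0.0))    # plain average
    print(tilted_risk(per_sample, t=10.0))   # positive t emphasizes the worst samples
    print(tilted_risk(per_sample, t=-10.0))  # negative t de-emphasizes outliers
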
18 · 52 · unknown · 30
Talks & Workshops by the CODAIT team
Created 2019-09-23
205 commits to master branch, last one 3 years ago
Official implementation of our work "Collaborative Fairness in Federated Learning."
Created 2020-04-02
413 commits to master branch, last one 6 months ago
8 · 47 · apache-2.0 · 7
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
This repository has been archived
Created 2021-12-10
1,906 commits to develop branch, last one 6 months ago
EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Created 2022-10-08
53 commits to main branch, last one 2 years ago
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.
Created 2021-12-20
26 commits to main branch, last one 2 years ago
3 · 34 · apache-2.0 · 6
Evidence-based tools and community collaboration to end algorithmic bias, one data scientist at a time.
Created 2022-09-21
311 commits to main branch, last one about a year ago
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
Created 2022-03-12
51 commits to main branch, last one about a month ago
A fairness library in PyTorch.
Created 2024-01-31
69 commits to main branch, last one 5 months ago