Trending repositories for topic interpretability
A game theoretic approach to explain the output of any machine learning model.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning models
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
[ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based model.
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
The nnsight package enables interpreting and manipulating the internals of deep learned models.
CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems.
A JAX research toolkit for building, editing, and visualizing neural networks.
A curated list of awesome responsible machine learning resources.
Sparse and discrete interpretability tool for neural networks
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
Fit interpretable models. Explain blackbox machine learning.
A collection of research materials on explainable AI/ML
A collection of infrastructure and tools for research in neural network interpretability.
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024
The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Scikit-learn friendly library to interpret and prompt-engineer text datasets using large language models.
Wanna know what your model sees? Here's a package for applying EigenCAM on the new YOLO V8 model
CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
An Open-Source Library for the interpretability of time series classifiers
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates"
GraphXAI: Resource to support the development and evaluation of GNN explainers
This repository introduces MentaLLaMA, the first open-source instruction following large language model for interpretable mental health analysis.
Linear probe found representations of scene attributes in a text-to-image diffusion model
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
A simple PyTorch implementation of influence functions.
Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023
Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper "Learning to Receive Help: Intervention-Aware Concept Embedding Models".
🧠 Starter templates for doing interpretability research
The source code of the paper: Trend attention fully convolutional network for remaining useful life estimation in the turbofan engine PHM of the CMAPSS dataset. Signal selection, attention mechanism, and interpretability.
PyTorch Explain: Interpretable Deep Learning in Python.
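Several entries above describe game-theoretic attribution (e.g. the SHAP description). The underlying idea is the Shapley value: average each feature's marginal contribution over all orderings of feature insertion. A minimal pure-Python sketch on a made-up two-feature model (the function `f` and the baseline are hypothetical, chosen only for illustration, not SHAP's actual API):

```python
from itertools import permutations

def f(x1, x2):
    # Toy "model": a linear function plus an interaction term.
    return 2 * x1 + 3 * x2 + x1 * x2

def shapley_values(x, baseline):
    """Exact Shapley values: average each feature's marginal
    contribution to f over all insertion orderings."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = f(*current)
        for i in order:
            current[i] = x[i]          # switch feature i from baseline to x
            now = f(*current)
            phi[i] += (now - prev) / len(orderings)
            prev = now
    return phi

x, base = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(x, base)
# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(*x) - f(*base))) < 1e-9
```

Libraries such as SHAP approximate these values efficiently for real models; the exact enumeration above is exponential in the number of features and is only practical for toy cases like this one.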
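The class-activation-map entries (CAM, Grad-CAM, and variants) share one core operation: a class-specific weighted sum of the final convolutional feature maps, followed by a ReLU to keep only regions that support the class. A dependency-free sketch of that step, using hypothetical toy feature maps and weights rather than any real network:

```python
def class_activation_map(feature_maps, class_weights):
    """feature_maps: K feature maps as HxW nested lists;
    class_weights: K per-channel weights for the target class."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * fmap[i][j]   # weighted sum over channels
    # ReLU: keep only positive (class-supporting) evidence.
    return [[max(0.0, v) for v in row] for row in cam]

# Two hypothetical 2x2 feature maps and target-class weights.
maps = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.0, 1.0], [3.0, 0.0]]]
weights = [0.5, -1.0]
cam = class_activation_map(maps, weights)
```

The variants listed above differ mainly in how `class_weights` are obtained (e.g. Grad-CAM derives them from gradients, Score-CAM from forward-pass scores); the weighted-sum-plus-ReLU step is common to all of them.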