Trending repositories for the explainable-ai topic
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
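Class-activation mapping (CAM/Grad-CAM), the core idea behind computer-vision explainability toolkits like the one above, can be sketched in a few lines. This is an illustrative toy, not the repository's implementation: the feature maps and gradients below are made-up 2x2 examples, and a real pipeline would pull them from a CNN layer via hooks.

```python
# Minimal Grad-CAM-style sketch (pure Python, no framework):
# the class-activation map is the ReLU of a channel-wise weighted sum
# of feature maps, with each weight = the spatially averaged gradient.

def grad_cam(activations, gradients):
    """activations, gradients: lists of 2D feature maps (one per channel)."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        # Channel weight: global average pool of that channel's gradients.
        weight = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * act[i][j]
    # ReLU: keep only regions with a positive influence on the class score.
    return [[max(v, 0.0) for v in row] for row in cam]

acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # → [[1.0, 0.0], [0.0, 2.0]]
```

The second channel gets a negative weight, so its activations are suppressed; the surviving map highlights where the first (positively contributing) channel fires.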
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!!
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libr...
Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper).
Curated list of open source tooling for data-centric AI on unstructured data.
A collection of research materials on explainable AI/ML
Fit interpretable models. Explain blackbox machine learning.
Variants of Vision Transformer and its downstream tasks
GraphXAI: Resource to support the development and evaluation of GNN explainers
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-bas...
ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included. ⭐ support ...
[EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"
Causal discovery algorithms and tools for implementing new ones
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Distributed High-Performance Symbolic Regression in Julia
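Symbolic regression, which the Julia package above performs at scale, searches a space of candidate expressions for the one that best fits the data. A deliberately tiny version of that search (three hand-picked candidate expressions, least-squares scoring) looks like this; the real library evolves far larger expression spaces in parallel:

```python
# Symbolic regression in miniature: pick, from a tiny space of candidate
# expressions, the one with the lowest squared error on (x, y) samples.

candidates = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "2x + 1": lambda x: 2 * x + 1,
}

def best_fit(xs, ys):
    def sq_err(f):
        return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=lambda name: sq_err(candidates[name]))

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # generated by y = 2x + 1
print(best_fit(xs, ys))    # → 2x + 1
```

Because the output is a readable formula rather than a weight matrix, the fitted model is its own explanation, which is what places symbolic regression under this topic.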
The code of NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classif...
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Interpretability and explainability of data and machine learning models
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
[NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models".
Real-time Intrusion Detection System implementing Machine Learning. We combine Supervised Learning (RF) for detecting known attacks from CICIDS 2018 & SCVIC-APT datasets, and Unsupervised Learning (AE...
This is an official implementation for [ICLR'24] INTR: Interpretable Transformer for Fine-grained Image Classification.
Time series explainability via self-supervised model behavior consistency
PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023)
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
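The appeal of explaining a tree ensemble with a rule list is that a rule list is just an ordered if/elif chain: the first matching rule decides, so every prediction comes with one human-readable reason. A toy sketch of that evaluation (the thresholds and feature name below are invented for illustration, not extracted from any real ensemble):

```python
# A rule list as an ordered if/elif chain: the first matching rule wins,
# which is what makes rule-list explanations easy to read.

rules = [
    (lambda x: x["petal_len"] < 2.0, "setosa"),
    (lambda x: x["petal_len"] < 4.8, "versicolor"),
]
default = "virginica"

def predict(x):
    for cond, label in rules:
        if cond(x):
            return label  # the matched rule doubles as the explanation
    return default

print(predict({"petal_len": 1.4}))  # → setosa
print(predict({"petal_len": 5.0}))  # → virginica
```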
This is an open-source tool to assess and improve the trustworthiness of AI systems.
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustwor...
[ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing NNs to computer code and discovering new algorithms which generalize out-of-distribution and outperform ...
SIDU: SImilarity Difference and Uniqueness method for explainable AI
OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Generate Diverse Counterfactual Explanations for any machine learning model.
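A counterfactual explanation answers "what is the smallest change to this input that flips the model's decision?" The toy search below conveys the idea only: the loan-approval model and feature names are hypothetical, and libraries in this space use far more sophisticated search (and return multiple diverse counterfactuals) rather than stepping a single feature:

```python
# Toy counterfactual search: step one feature until the decision flips.
# Hypothetical model: approve when income - 0.5 * debt >= 50.

def model(x):
    return 1 if x["income"] - 0.5 * x["debt"] >= 50 else 0

def counterfactual(x, feature, step=1.0, max_steps=1000):
    """Increase `feature` until the model's decision flips."""
    target = 1 - model(x)
    cf = dict(x)  # work on a copy; the query instance is untouched
    for _ in range(max_steps):
        cf[feature] += step
        if model(cf) == target:
            return cf
    return None

query = {"income": 40.0, "debt": 10.0}
print(model(query))                      # rejected
print(counterfactual(query, "income"))   # → {'income': 55.0, 'debt': 10.0}
```

The returned instance reads directly as advice: raising income to 55 would have changed the outcome.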
A list of (post-hoc) XAI for time series
[ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection
Carefully curated list of awesome data science resources.
[CIKM'2023] "STExplainer: Explainable Spatio-Temporal Graph Neural Networks"
Neatly packaged AI methods for explainable ECG analysis
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
Main folder. Material related to my books on synthetic data and generative AI. Also contains documents blending components from several folders, or covering topics spanning multiple folders.
Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper "Learning to Receive Help: Intervention-Aware Concept Embeddin...