Trending repositories for topic explainable-ai
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!!
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustwor...
Causal discovery algorithms and tools for implementing new ones
An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
Examples of Data Science projects and Artificial Intelligence use-cases
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included. ⭐ support ...
A collection of research materials on explainable AI/ML
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
[NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models".
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Interpretability and explainability of data and machine learning models
[EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"
Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper).
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-bas...
Generate Diverse Counterfactual Explanations for any machine learning model.
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libr...
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
GraphXAI: Resource to support the development and evaluation of GNN explainers
Time series explainability via self-supervised model behavior consistency
Official code for NeurIPS 2022 paper https://arxiv.org/abs/2208.00780 Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
[CIKM'2023] "STExplainer: Explainable Spatio-Temporal Graph Neural Networks"
Dataset and code for "Explainable Automated Fact-Checking for Public Health Claims" from EMNLP 2020.
[EMNLP'2024] "XRec: Large Language Models for Explainable Recommendation"
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
A toolkit for quantitative evaluation of data attribution methods.
OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models
This is an open-source tool to assess and improve the trustworthiness of AI systems.
Real-time Intrusion Detection System implementing Machine Learning. We combine Supervised Learning (RF) for detecting known attacks from CICIDS 2018 & SCVIC-APT datasets, and Unsupervised Learning (AE...
PyTorch Explain: Interpretable Deep Learning in Python.
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing NNs to computer code and discovering new algorithms which generalize out-of-distribution and outperform ...
SIDU: SImilarity Difference and Uniqueness method for explainable AI
[ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Distributed High-Performance Symbolic Regression in Julia
[ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection
This is an official implementation for [ICLR'24] INTR: Interpretable Transformer for Fine-grained Image Classification.
Carefully curated list of awesome data science resources.
Real-time explainable machine learning for business optimisation
A PyTorch implementation of constrained optimization and modeling techniques
Material related to my books on synthetic data and generative AI. Also contains documents that blend components from several folders or cover topics spanning multiple folders.
Neatly packaged AI methods for explainable ECG analysis
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023)
Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper "Learning to Receive Help: Intervention-Aware Concept Embeddin...
In this project, I have used survival analysis models to see how the likelihood of customer churn changes over time and to calculate customer LTV. I have also implemented the Random Forest mod...