Trending repositories for topic explainable-ai
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
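Tools like this visualize which image regions drive a model's prediction. A dependency-free sketch of the simpler occlusion-saliency idea (not this library's Grad-CAM API; the toy scoring function below is an illustrative assumption): zero out each pixel, re-score, and record the score drop.

```python
def model_score(img):
    # Toy "classifier": responds strongly to bright pixels in the top-left 2x2 corner.
    return sum(img[r][c] for r in range(2) for c in range(2))

def occlusion_map(img, score_fn):
    """Score drop when each pixel is zeroed: a bigger drop means more important."""
    base = score_fn(img)
    h, w = len(img), len(img[0])
    saliency = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            kept = img[r][c]
            img[r][c] = 0.0                    # occlude one pixel
            saliency[r][c] = base - score_fn(img)
            img[r][c] = kept                   # restore the original value
    return saliency

img = [[1.0, 1.0, 0.0],
       [1.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
sal = occlusion_map(img, model_score)
# The four top-left pixels each get saliency 1.0; all others get 0.0.
```

Real methods like Grad-CAM get the same kind of map far more cheaply from gradients, but occlusion is the easiest way to see what a saliency map measures.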
Framework-agnostic sliced/tiled inference + interactive UI + error-analysis plots
Distributed High-Performance Symbolic Regression in Julia
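Symbolic regression searches a space of closed-form expressions for one that fits the data, rather than fitting weights in a fixed model. A minimal Python sketch of the idea (the tiny hand-picked candidate set is an illustrative assumption; real systems like this one evolve expressions over a vastly larger space):

```python
import math

# Candidate closed-form expressions; a real system searches this space automatically.
candidates = {
    "x^2": lambda x: x * x,
    "2x": lambda x: 2 * x,
    "sin(x)": math.sin,
}

def best_formula(xs, ys):
    """Return the candidate expression with the lowest sum of squared errors."""
    def sse(f):
        return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=lambda name: sse(candidates[name]))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]        # hidden ground truth: y = x^2
print(best_formula(xs, ys))     # prints "x^2"
```

The payoff for explainability is that the output is a human-readable formula instead of an opaque parameter vector.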
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems.
An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included.
Fit interpretable models. Explain blackbox machine learning.
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper).
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
Generate Diverse Counterfactual Explanations for any machine learning model.
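Counterfactual explanation methods answer: what minimal change to the input would flip the model's decision? A library-free sketch of the idea (the toy scoring function and one-feature greedy search below are illustrative assumptions, not this repository's API):

```python
def predict(x):
    # Toy "loan approval" model: approve when the linear score exceeds 0.5.
    # Purely illustrative; in practice this would wrap any trained classifier.
    score = 0.04 * x["income"] + 0.3 * x["has_collateral"] - 0.02 * x["debt"]
    return score > 0.5

def counterfactual(x, feature, step, max_iters=100):
    """Greedily nudge one feature until the model's prediction flips."""
    original = predict(x)
    cf = dict(x)
    for _ in range(max_iters):
        cf[feature] += step
        if predict(cf) != original:
            return cf
    return None  # no flip found within the search budget

applicant = {"income": 10, "has_collateral": 0, "debt": 5}
print(predict(applicant))                     # prints False (rejected)
cf = counterfactual(applicant, "income", step=1)
print(cf["income"])                           # smallest income on this grid that flips the decision
```

Real counterfactual tools additionally optimize for proximity, sparsity, and diversity of the generated examples, but the "search for a decision flip" core is the same.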
A collection of research materials on explainable AI/ML
[NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models".
Real-time Intrusion Detection System implementing Machine Learning. We combine Supervised Learning (RF) for detecting known attacks from CICIDS 2018 & SCVIC-APT datasets, and Unsupervised Learning (AE...
This is an official implementation for [ICLR'24] INTR: Interpretable Transformer for Fine-grained Image Classification.
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy...
[EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"
Examples of Data Science projects and Artificial Intelligence use-cases
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Interpretability and explainability of data and machine learning models
A toolkit for quantitative evaluation of data attribution methods.
Time series explainability via self-supervised model behavior consistency
[CIKM'2023] "STExplainer: Explainable Spatio-Temporal Graph Neural Networks"
This is an open-source tool to assess and improve the trustworthiness of AI systems.
Fused Window Transformers for fMRI Time Series Analysis (https://www.sciencedirect.com/science/article/pii/S1361841523001019)
[EMNLP'2024] "XRec: Large Language Models for Explainable Recommendation"
PyTorch Explain: Interpretable Deep Learning in Python.
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
Causal discovery algorithms and tools for implementing new ones
Dataset and code for "Explainable Automated Fact-Checking for Public Health Claims" from EMNLP 2020.
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustwor...
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing NNs to computer code and discovering new algorithms which generalize out-of-distribution and outperform ...
SIDU: SImilarity Difference and Uniqueness method for explainable AI
[ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-bas...
[ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection
Carefully curated list of awesome data science resources.
Real-time explainable machine learning for business optimisation
Main folder: material related to my books on synthetic data and generative AI. Also contains documents blending components from several folders, or covering topics that span multiple folders.
Neatly packaged AI methods for explainable ECG analysis
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustwor...
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023)
Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper "Learning to Receive Help: Intervention-Aware Concept Embeddin...
In this project, I have utilized survival analysis models to see how the likelihood of customer churn changes over time and to calculate customer LTV. I have also implemented the Random Forest mod...
Helping AI practitioners better understand their datasets and models in text classification. From ServiceNow.