Trending repositories for topic explainable-ai
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
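This blurb appears to describe the pytorch-grad-cam library; assuming that API, a minimal Grad-CAM sketch might look like the following (the ResNet-50 target layer and class index are illustrative choices, not prescribed by the listing):

```python
# Hedged sketch assuming the pytorch-grad-cam API; model, target layer,
# and class index are illustrative assumptions.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

# Grad-CAM over the last bottleneck block of a ResNet-50.
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(281)])  # 281 = "tabby cat" in ImageNet
print(heatmap.shape)  # (1, 224, 224): one saliency map per input image
```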
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
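The sliced-inference blurb matches the obss/sahi package; a hedged sketch assuming its API, with an Ultralytics YOLOv8 checkpoint as an illustrative detector:

```python
# Hedged sketch assuming the obss/sahi API; the model type, checkpoint path,
# input image, and slice sizes are illustrative assumptions.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",        # hypothetical local checkpoint
    confidence_threshold=0.3,
)

# Tile the image into overlapping 512x512 slices, run the detector on each
# slice, then merge per-slice detections back into full-image coordinates.
result = get_sliced_prediction(
    "large_image.jpg",              # hypothetical input image
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
print(len(result.object_prediction_list))  # merged detections
```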
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!!
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
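This blurb appears to describe the transformers-interpret package; assuming that API, the advertised two-line usage might look like this (the checkpoint and input sentence are illustrative):

```python
# Hedged sketch assuming the transformers-interpret API; checkpoint and
# input text are illustrative assumptions.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The advertised "2 lines": wrap the model, then call the explainer on text.
explainer = SequenceClassificationExplainer(model, tokenizer)
attributions = explainer("Explainability should be this easy.")

print(explainer.predicted_class_name)  # the label the attributions explain
print(attributions)                    # [(token, attribution score), ...]
```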
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
Main folder. Material related to my books on synthetic data and generative AI. Also contains documents blending components from several folders, or covering topics that span multiple folders.
Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper).
Distributed High-Performance Symbolic Regression in Julia
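SymbolicRegression.jl is a Julia library; to keep the examples here in one language, the following is a hedged sketch via PySR, its Python frontend (the operators and iteration count are illustrative):

```python
# Hedged sketch using PySR, the Python frontend to SymbolicRegression.jl;
# hyperparameters and the toy target formula are illustrative.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2  # hidden ground-truth formula

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["cos"],
)
model.fit(X, y)       # evolves a population of symbolic expressions
print(model.sympy())  # best recovered expression, e.g. 2.5*cos(x0) + x1**2
```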
Generate Diverse Counterfactual Explanations for any machine learning model.
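This blurb matches the interpretml/DiCE package; a hedged sketch assuming its sklearn backend (the toy loan-approval data is invented for illustration):

```python
# Hedged sketch assuming the interpretml/DiCE API; the toy dataset and
# classifier are illustrative assumptions.
import dice_ml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age":      [25, 47, 39, 58, 31, 52, 44, 29],
    "income":   [30, 90, 55, 120, 40, 100, 70, 35],
    "approved": [0, 1, 0, 1, 0, 1, 1, 0],
})
clf = RandomForestClassifier().fit(df[["age", "income"]], df["approved"])

data = dice_ml.Data(dataframe=df, continuous_features=["age", "income"],
                    outcome_name="approved")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model)

# Ask for 3 diverse counterfactuals that flip the model's prediction.
cfs = explainer.generate_counterfactuals(df[["age", "income"]][0:1],
                                         total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe()
```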
Generative AI SDK for Web to create AI Agents for apps built with JavaScript, React, Angular, Vue, Ember, Electron
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly.
In this project, I have utilized survival analysis models to see how the likelihood of customer churn changes over time and to calculate customer LTV. I have also implemented a Random Forest model to predict whether a customer will churn.
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI.
[ACM MM 2023 Oral] "Towards Explainable In-the-wild Video Quality Assessment: A Database and a Language-Prompted Approach"
[EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"
An open-source library for the interpretability of time series classifiers
Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
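For readers new to the topic of that reading list, here is a self-contained sketch of the exact Shapley value for a tiny cooperative game (the three-player characteristic function is invented for illustration):

```python
# Exact Shapley values for a toy 3-player cooperative game, computed from
# the textbook definition; the characteristic function v is invented.
from itertools import permutations

players = ["A", "B", "C"]
v = {  # value of each coalition (frozenset -> worth)
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

# Shapley value = average marginal contribution over all join orders.
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        shapley[p] += v[coalition | {p}] - v[coalition]
        coalition = coalition | {p}
for p in shapley:
    shapley[p] /= len(orders)

print(shapley)  # the values sum to v(ABC) = 90 (efficiency axiom)
```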
📦 PyTorch-based visualization package for generating layer-wise explanations for CNNs.
OpenXAI: Towards a Transparent Evaluation of Model Explanations
A collection of research materials on explainable AI/ML
[ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based architecture.
[EMNLP 2024] "XRec: Large Language Models for Explainable Recommendation"
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
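This blurb matches the csinva/imodels package; a hedged sketch assuming its sklearn-style interface (the RuleFit estimator and the breast-cancer dataset are illustrative choices):

```python
# Hedged sketch assuming the imodels sklearn-compatible API; the choice of
# RuleFitClassifier and of dataset are illustrative assumptions.
from imodels import RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()         # fits a sparse set of human-readable rules
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # standard sklearn scoring
```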
Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.
Real-time Intrusion Detection System implementing Machine Learning. We combine Supervised Learning (RF) for detecting known attacks from the CICIDS 2018 & SCVIC-APT datasets, and Unsupervised Learning (AE) for detecting unknown attacks.
[ICLR'24] Official implementation of INTR: Interpretable Transformer for Fine-grained Image Classification.
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
Paper and dataset summary for "Explainable Anomaly Detection in Images and Videos: A Survey"
Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper "Learning to Receive Help: Intervention-Aware Concept Embedding Models".
This is an open-source tool to assess and improve the trustworthiness of AI systems.
Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
Variants of the Vision Transformer and its downstream tasks
[NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models".
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
SIDU: SImilarity Difference and Uniqueness method for explainable AI
[ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models
Interpretability and explainability of data and machine learning models
[ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection
Carefully curated list of awesome data science resources.
Time series explainability via self-supervised model behavior consistency
[CIKM 2023] "STExplainer: Explainable Spatio-Temporal Graph Neural Networks"
Causal discovery algorithms and tools for implementing new ones
Explainable Reinforcement Learning (XRL) Resources