12 results found

Model interpretability and understanding for PyTorch
Created 2019-08-27
1,092 commits to master branch, last one 16 hours ago

Leave One Feature Out Importance
Created 2019-01-14
32 commits to master branch, last one 4 months ago

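The "Leave One Feature Out" importance idea behind this entry can be sketched generically: retrain the model with each feature removed in turn and record the drop in cross-validated score. This is an illustration using scikit-learn, not the repository's actual API; the dataset and model choices here are assumptions for the demo.

```python
# Generic Leave-One-Feature-Out importance sketch (not the repo's API):
# drop each feature, refit, and measure the change in CV score.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

baseline = cross_val_score(model, X, y, cv=5).mean()

importances = []
for j in range(X.shape[1]):
    X_drop = np.delete(X, j, axis=1)           # refit without feature j
    score = cross_val_score(model, X_drop, y, cv=5).mean()
    importances.append(baseline - score)       # positive => dropping j hurt
```

A positive importance means the model got worse without that feature; values near zero suggest the feature is redundant given the others.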
ProphitBet is a machine learning soccer bet prediction application. It analyzes team form, computes match statistics, and predicts match outcomes using advanced machine learning (ML) m...
Created 2022-03-18
212 commits to main branch, last one 25 days ago

This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors on a given dataset. This library can be used for key driver analysis or marginal...
Created 2018-12-05
515 commits to master branch, last one 9 months ago

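Shapley Value Regression, as named in the entry above, attributes the full model's R² to each predictor as its average marginal contribution over all subsets of the other predictors. The sketch below is a generic exact implementation (feasible only for a handful of predictors) and is not this package's API; the synthetic data is an assumption for the demo.

```python
# Shapley decomposition of R^2 across predictors (exact, small p only).
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.linear_model import LinearRegression

def r2_of_subset(X, y, subset):
    """R^2 of an OLS fit on the given column subset (0.0 for the empty set)."""
    if not subset:
        return 0.0
    return LinearRegression().fit(X[:, subset], y).score(X[:, subset], y)

def shapley_r2(X, y):
    p = X.shape[1]
    values = np.zeros(p)
    for j in range(p):
        others = [f for f in range(p) if f != j]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (p - |S| - 1)! / p!
                w = factorial(len(S)) * factorial(p - len(S) - 1) / factorial(p)
                values[j] += w * (r2_of_subset(X, y, list(S) + [j])
                                  - r2_of_subset(X, y, list(S)))
    return values  # sums to the full-model R^2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=200)
vals = shapley_r2(X, y)
```

By the efficiency property of Shapley values, `vals.sum()` equals the R² of the regression on all three predictors, so the decomposition is an exact attribution rather than an approximation.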
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Created 2018-05-18
60 commits to master branch, last one 2 years ago

In this project I aim to apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Created 2019-05-07
50 commits to master branch, last one about a year ago

Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Created 2019-02-12
89 commits to master branch, last one 3 years ago

A Julia package for interpretable machine learning with stochastic Shapley values
Created 2020-01-23
118 commits to master branch, last one 2 years ago

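The "stochastic Shapley values" the entry above refers to are typically computed by permutation sampling: average each player's marginal contribution over random orderings. A minimal generic sketch in Python (not the Julia package's API; the toy game is an assumption for the demo):

```python
# Monte Carlo (permutation-sampling) approximation of Shapley values.
import numpy as np

def sampled_shapley(value_fn, p, n_samples=2000, seed=0):
    """Approximate Shapley values of a set function value_fn over p players."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(p)
    for _ in range(n_samples):
        perm = rng.permutation(p)
        prev = 0.0
        members = []
        for j in perm:
            members.append(j)
            cur = value_fn(members)
            phi[j] += cur - prev   # marginal contribution of j in this order
            prev = cur
    return phi / n_samples

# toy additive game: a coalition's value is the sum of its members' weights,
# so the Shapley values recover the weights exactly
weights = np.array([3.0, 1.0, 0.0])
phi = sampled_shapley(lambda S: weights[list(S)].sum(), p=3)
```

For non-additive games (e.g. a model's prediction as a function of which features are "present"), the estimate converges to the exact Shapley values as `n_samples` grows, which is what makes the stochastic approach tractable when enumerating all coalitions is not.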
Adds a feature_importances_ property to the sklearn.cluster.KMeans class
Created 2021-07-17
22 commits to kmeans-feature-importance-v01 branch, last one 9 months ago

SHAP Interaction Quantification (SHAP-IQ for short) is an XAI framework extending the well-known SHAP explanations by introducing interactions, i.e., synergy scores.
Created 2023-10-17
558 commits to main branch, last one 18 hours ago