laura-rieger / deep-explanation-penalization

Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
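The repository implements CDEP (contextual decomposition explanation penalization) from the paper above. As a rough illustration of the general idea — a prediction loss plus a term penalizing explanations that contradict prior knowledge — here is a minimal sketch using a linear model, where simple per-feature attributions (`w_j * x_j`) stand in for the paper's contextual-decomposition scores. The function name, the attribution choice, and the penalty form are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def penalized_loss(w, X, y, irrelevant, lam=1.0):
    """Cross-entropy plus a penalty on attributions for features a prior
    deems irrelevant (hypothetical simplification of the CDEP objective).

    For a linear model, the per-feature attribution on an example is
    w_j * x_j, so we penalize the magnitude of those attributions on the
    features listed in `irrelevant`.
    """
    logits = X @ w
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid predictions
    eps = 1e-12
    pred_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    attributions = X * w                        # (n, d) per-feature contributions
    expl_penalty = np.mean(np.abs(attributions[:, irrelevant]))
    return pred_loss + lam * expl_penalty
```

With `lam=0` this reduces to the ordinary prediction loss; increasing `lam` trades predictive fit against agreement with the prior, which is the knob the paper studies (there via contextual decomposition rather than this linear shortcut).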

Date Created 2019-02-12 (5 years ago)
Commits 89 (last one 3 years ago)
Stargazers 125 (0 this week)
Watchers 9 (0 this week)
Forks 14
License MIT
Ranking

RepositoryStats indexes 565,600 repositories; of these, laura-rieger/deep-explanation-penalization is ranked #243,569 (57th percentile) for total stargazers and #221,272 for total watchers. GitHub reports the primary language for this repository as Jupyter Notebook; among repositories using this language, it is ranked #5,764/16,305.

laura-rieger/deep-explanation-penalization is also tagged with popular topics, for which it is ranked: python (#10,986/21,424), deep-learning (#4,421/8,173), machine-learning (#4,118/7,699), pytorch (#3,121/5,824), ai (#1,757/3,591), data-science (#1,172/2,055), artificial-intelligence (#1,027/1,957), neural-network (#618/1,072), jupyter-notebook (#331/598), ml (#324/582), explainable-ai (#60/163), interpretability (#71/154)

Other Information

laura-rieger/deep-explanation-penalization has GitHub issues enabled, with 1 open issue and 12 closed issues.

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

0 commits on the default branch (master) since January 2022

Inactive

No recent commits to this repository

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The primary language is Jupyter Notebook, but other languages are also present.

Opengraph Image

updated: 2024-09-17 @ 03:06pm, id: 170393886 / R_kgDOCigBHg