hila-chefer / Transformer-MM-Explainability

[ICCV 2021, Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.

Date Created 2021-03-23 (3 years ago)
Commits 77 (last one about a year ago)
Stargazers 809 (1 this week)
Watchers 8 (0 this week)
Forks 107
License MIT
Ranking

RepositoryStats indexes 595,856 repositories; of these, hila-chefer/Transformer-MM-Explainability is ranked #62,734 (89th percentile) for total stargazers and #246,776 for total watchers. GitHub reports the primary language for this repository as Jupyter Notebook; among repositories using this language it is ranked #1,320/17,543.

hila-chefer/Transformer-MM-Explainability is also tagged with popular topics; for these it is ranked: visualization (#314/1,591), transformer (#150/1,018), transformers (#158/849), explainable-ai (#22/171), interpretability (#26/169)

Other Information

hila-chefer/Transformer-MM-Explainability has 3 open pull requests on GitHub; 2 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 9 open issues and 27 closed issues.

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in 2023)

Recent Commit History

16 commits on the default branch (main) since January 2022

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Jupyter Notebook, but the repository also contains other languages.

Updated: 2024-12-20 @ 08:02 AM, id: 350871478 / R_kgDOFOnftg