PaulPauls / llama3_interpretability_sae

A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
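To illustrate the core technique the repository is built around, below is a minimal, hypothetical sketch of a sparse autoencoder trained on captured residual-stream activations in plain PyTorch. It is not taken from the repository; the class, loss, and dimension choices are illustrative assumptions only.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Minimal SAE: overcomplete ReLU dictionary with an L1 sparsity penalty (illustrative only)."""

        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, x: torch.Tensor):
            features = torch.relu(self.encoder(x))  # sparse feature activations
            recon = self.decoder(features)          # reconstruction of the input activations
            return recon, features

    def sae_loss(x, recon, features, l1_coeff=1e-3):
        # Reconstruction error plus an L1 penalty that encourages sparse features.
        mse = torch.mean((recon - x) ** 2)
        l1 = l1_coeff * features.abs().mean()
        return mse + l1

    if __name__ == "__main__":
        d_model, d_hidden = 2048, 16384             # hypothetical sizes, not from the repo
        sae = SparseAutoencoder(d_model, d_hidden)
        opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
        activations = torch.randn(64, d_model)      # stand-in for Llama 3.2 hidden states
        recon, feats = sae(activations)
        loss = sae_loss(activations, recon, feats)
        loss.backward()
        opt.step()

In a full pipeline such as this repository's, the input batch would come from hooked activations of a Llama 3.2 model rather than random tensors, and the learned features would then be inspected for interpretability.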

Date Created 2024-11-21 (a day ago)
Commits 3 (last one 18 hours ago)
Stargazers 486 (486 this week)
Watchers 3 (2 this week)
Forks 18
License unknown
Ranking

RepositoryStats indexes 585,332 repositories; of these, PaulPauls/llama3_interpretability_sae is ranked #154,646 (74th percentile) for total stargazers and #536,654 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #27,013/116,570.

PaulPauls/llama3_interpretability_sae is also tagged with popular topics; for these it is ranked: pytorch (#2,178/5,954) and llama3 (#64/156).

Other Information

There has been 1 release; the latest was published on 2024-11-21 (a day ago) with the name Initial Release [v0.2.0].

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in '23

Recent Commit History

2 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The only known language in this repository is Python
