PaulPauls / llama3_interpretability_sae

A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
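To illustrate the technique the repository implements, here is a minimal sketch of a sparse autoencoder in PyTorch: an overcomplete ReLU encoder over residual-stream activations, a linear decoder, and an L1 sparsity penalty on the latent code. The class and hyperparameter names are hypothetical and this is not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE sketch (hypothetical, not the repo's code):
    maps d_model activations to an overcomplete n_latents code."""

    def __init__(self, d_model: int, n_latents: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # non-negative, sparse latent code
        x_hat = self.decoder(z)           # reconstruction of the activations
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coef: float = 1e-3):
    """Reconstruction MSE plus an L1 penalty that encourages sparse codes.
    l1_coef is an illustrative value, not taken from the repository."""
    recon = (x_hat - x).pow(2).mean()
    sparsity = l1_coef * z.abs().mean()
    return recon + sparsity
```

In practice such an SAE is trained on activations captured from a frozen Llama 3.2 forward pass; the sparse latents are then inspected as candidate interpretable features.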

Date Created 2024-11-21 (about a month ago)
Commits 2 (last one 27 days ago)
Stargazers 604 (-1 this week)
Watchers 5 (0 this week)
Forks 32
License unknown
This repository has been archived on GitHub
Ranking

RepositoryStats indexes 597,824 repositories; of these, PaulPauls/llama3_interpretability_sae is ranked #79,458 (87th percentile) for total stargazers and #336,211 for total watchers.

PaulPauls/llama3_interpretability_sae is also tagged with popular topics, for which it is ranked: pytorch (#1,218/6,035), llama3 (#43/170)

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in '23

Recent Commit History

2 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

We don't have any language data for this repository


Updated: 2024-12-23 @ 07:33pm