thu-ml / SageAttention

Quantized Attention that achieves speedups of 2.1-3.1x over FlashAttention2 and 2.7-5.1x over xformers, without losing end-to-end metrics across various models.
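The project ships CUDA kernels behind a Python interface intended as a plug-and-play replacement for standard attention. A minimal usage sketch follows, assuming the package is installed (e.g. via pip) and exposes a sageattn entry point as described in the project's README; the tensor_layout and is_causal parameters and the tensor shapes here are illustrative assumptions and may differ between releases.

```python
# Minimal sketch of calling SageAttention's quantized attention kernel.
# Assumes a CUDA-capable GPU and that `sageattn` is importable as shown
# in the project README; arguments are assumptions, not a guaranteed API.
import torch
from sageattention import sageattn

batch, heads, seq_len, head_dim = 2, 8, 1024, 64

# Query/key/value tensors in (batch, heads, seq_len, head_dim) layout ("HND").
q = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")

# Intended as a drop-in substitute for
# torch.nn.functional.scaled_dot_product_attention in inference code.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```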

Date Created 2024-10-03 (2 months ago)
Commits 61 (last one 4 days ago)
Stargazers 700 (40 this week)
Watchers 18 (2 this week)
Forks 34
License apache-2.0
Ranking

RepositoryStats indexes 594,982 repositories; of these, thu-ml/SageAttention is ranked #70,834 (88th percentile) for total stargazers and #123,258 for total watchers. Github reports the primary language for this repository as Cuda; among repositories using this language it is ranked #37/356.

thu-ml/SageAttention is also tagged with popular topics, for which it is ranked: llm (#561/2893), cuda (#130/652), video-generation (#32/146)

Other Information

thu-ml/SageAttention has 1 open pull request on Github; 6 pull requests have been merged over the lifetime of the repository.

Github issues are enabled; there are 24 open issues and 39 closed issues.

Star History

Github stargazers over time

Watcher History

Github watchers over time, collection started in '23

Recent Commit History

61 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Cuda, but other languages are also present.

updated: 2024-12-18 @ 12:25am, id: 867007699 / R_kgDOM6180w