Bruce-Lee-LY / decoding_attention

Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference.
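In the decoding stage, each step attends one new query token over the keys and values cached from all previous tokens. A minimal single-head CPU sketch in C++ of that computation (function name and scalar loops are illustrative only; the repository's actual kernels run this math on CUDA cores, parallelized across threads):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Decoding-stage attention for one head: a single query vector attends
// over the cached keys/values of all seq_len previous tokens.
// Hypothetical reference sketch, not the repository's implementation.
std::vector<float> decode_attention(
    const std::vector<float>& q,                       // [head_dim]
    const std::vector<std::vector<float>>& k_cache,    // [seq_len][head_dim]
    const std::vector<std::vector<float>>& v_cache)    // [seq_len][head_dim]
{
    const std::size_t seq_len  = k_cache.size();
    const std::size_t head_dim = q.size();
    const float scale = 1.0f / std::sqrt(static_cast<float>(head_dim));

    // 1. Scaled dot-product scores: s_i = (q . k_i) / sqrt(head_dim)
    std::vector<float> scores(seq_len, 0.0f);
    float max_score = -INFINITY;
    for (std::size_t i = 0; i < seq_len; ++i) {
        float dot = 0.0f;
        for (std::size_t d = 0; d < head_dim; ++d) dot += q[d] * k_cache[i][d];
        scores[i] = dot * scale;
        max_score = std::max(max_score, scores[i]);
    }

    // 2. Numerically stable softmax over the scores.
    float denom = 0.0f;
    for (auto& s : scores) { s = std::exp(s - max_score); denom += s; }

    // 3. Weighted sum of cached values: out = sum_i softmax(s)_i * v_i
    std::vector<float> out(head_dim, 0.0f);
    for (std::size_t i = 0; i < seq_len; ++i) {
        const float w = scores[i] / denom;
        for (std::size_t d = 0; d < head_dim; ++d) out[d] += w * v_cache[i][d];
    }
    return out;
}
```

Because the query is a single token, this is a matrix-vector rather than matrix-matrix workload, which is why CUDA-core kernels (rather than Tensor-Core GEMMs) can be the better fit for decoding.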

Date Created 2024-08-14 (4 months ago)
Commits 1 (last one about a month ago)
Stargazers 28 (1 this week)
Watchers 2 (0 this week)
Forks 1
License BSD-3-Clause
Ranking

RepositoryStats indexes 597,394 repositories; of these, Bruce-Lee-LY/decoding_attention is ranked #582,835 (2nd percentile) for total stargazers and #486,158 for total watchers. GitHub reports the primary language for this repository as C++; among repositories using this language it is ranked #31,342/31,920.

Bruce-Lee-LY/decoding_attention is also tagged with popular topics, for which it is ranked: llm (#2,788/2,935), gpu (#907/918), cuda (#642/654), nvidia (#308/313), inference (#304/310)

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time, collection started in '23

Recent Commit History

1 commit on the default branch (master) since Jan '22

Yearly Commits

Commits to the default branch (master) per year

Issue History

No issues have been posted

Languages

The primary language is C++, but other languages are also present.

updated: 2024-12-26 @ 01:59am, id: 842468267 / R_kgDOMjcLqw