SqueezeAILab / KVQuant

[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

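For orientation, the repository implements low-bit quantization of the attention KV cache. Below is a minimal, illustrative numpy sketch of the general idea (per-channel quantization for keys, per-token quantization for values, as described in the paper's abstract); the function names, shapes, and the simple uniform quantizer are assumptions for illustration and are not the repository's actual implementation, which uses non-uniform quantization and further optimizations.

```python
import numpy as np

def quantize(x, n_bits, axis):
    """Uniform symmetric quantization along `axis` (illustrative sketch only)."""
    qmax = 2 ** (n_bits - 1) - 1
    # Scale so the max-magnitude entry along `axis` maps to the largest code.
    scale = np.abs(x).max(axis=axis, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)  # int8 holds the 4-bit codes
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy single-layer KV cache: (seq_len, num_heads * head_dim).
keys = np.random.randn(16, 64).astype(np.float32)
values = np.random.randn(16, 64).astype(np.float32)

# Keys quantized per channel (axis 0), values per token (axis 1).
k_q, k_scale = quantize(keys, n_bits=4, axis=0)
v_q, v_scale = quantize(values, n_bits=4, axis=1)

print("key reconstruction error:", np.abs(dequantize(k_q, k_scale) - keys).mean())
print("value reconstruction error:", np.abs(dequantize(v_q, v_scale) - values).mean())
```
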
Date created: 2024-01-31 (10 months ago)
Commits: 12 (last one 5 months ago)
Stargazers: 315 (2 this week)
Watchers: 12 (0 this week)
Forks: 27
License: unknown
Ranking

RepositoryStats indexes 595,890 repositories; of these, SqueezeAILab/KVQuant is ranked #129,794 (78th percentile) for total stargazers and #178,815 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #22,334/119,419.

SqueezeAILab/KVQuant is also tagged with popular topics; for these it is ranked: llm (#931/2911), natural-language-processing (#475/1429), large-language-models (#360/1090), transformer (#307/1018), llama (#223/545), compression (#133/451), text-generation (#56/161), model-compression (#34/108)

Other Information

SqueezeAILab/KVQuant has 1 open pull request on GitHub; 0 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 13 open issues and 4 closed issues.

Homepage URL: https://arxiv.org/abs/2401.18079

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in 2023)

Recent Commit History

12 commits on the default branch (main) since Jan 2022

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Python, but other languages are also present.

updated: 2024-12-19 @ 02:50pm, id: 750973875 / R_kgDOLMLzsw