RahulSChand / gpu_poor

Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
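The tool's core idea — estimating GPU memory needs from model size and quantization level — can be sketched with simple arithmetic. The following is an illustrative sketch only (function names and the 20% overhead factor are assumptions, not the repo's actual code): weight memory is roughly parameters × bytes per parameter, scaled by an overhead allowance for activations and runtime buffers.

```python
# Illustrative sketch (NOT the repo's actual implementation): rough VRAM
# estimate for loading LLM weights. The 1.2x overhead_factor is an assumed
# fudge factor for activations/buffers, not a value from gpu_poor.

def estimate_inference_memory_gib(num_params_billion: float,
                                  bits_per_param: int = 16,
                                  overhead_factor: float = 1.2) -> float:
    """Rough GPU memory estimate (GiB) for model weights at a given precision."""
    bytes_per_param = bits_per_param / 8          # e.g. 16-bit -> 2 bytes
    weight_bytes = num_params_billion * 1e9 * bytes_per_param
    return weight_bytes * overhead_factor / (1024 ** 3)

# A 7B model at 4-bit (QLoRA/GGML-style) quantization fits in ~4 GiB,
# versus ~15-16 GiB at fp16:
print(round(estimate_inference_memory_gib(7, bits_per_param=4), 1))   # ~3.9
print(round(estimate_inference_memory_gib(7, bits_per_param=16), 1))  # ~15.6
```

This back-of-the-envelope math is why 4-bit quantization (as supported here via llama.cpp/ggml/bnb/QLoRA) makes 7B-class models usable on consumer GPUs.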

Date Created 2023-09-12 (about a year ago)
Commits 66 (last one 10 months ago)
Stargazers 783 (4 this week)
Watchers 5 (0 this week)
Forks 38
License unknown
Ranking

RepositoryStats indexes 565,279 repositories; of these, RahulSChand/gpu_poor is ranked #62,799 (89th percentile) for total stargazers and #326,700 for total watchers. GitHub reports the primary language for this repository as JavaScript; among repositories using this language it is ranked #8,769/63,943.

RahulSChand/gpu_poor is also tagged with popular topics, for which it is ranked: pytorch (#979/5822), llm (#448/2462), gpu (#174/875), llama (#119/489), language-model (#104/487), huggingface (#56/361), llama2 (#45/238)

Other Information

RahulSChand/gpu_poor has GitHub issues enabled; there are 7 open issues and 7 closed issues.

There has been 1 release; the latest was published on 2023-10-29 (11 months ago) under the name "Added way to calculate ~token/s & training time".

Homepage URL: https://rahulschand.github.io/gpu_poor/

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in '23

Recent Commit History

66 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is JavaScript, but other languages are also present.

updated: 2024-09-28 @ 01:55pm, id: 690511522 / R_kgDOKSheog