xyjigsaw / LLM-Pretrain-SFT

Scripts for LLM pre-training and fine-tuning (with or without LoRA, DeepSpeed)
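The repository covers LoRA-based fine-tuning, among other setups. As context (not code from this repository), here is a minimal NumPy sketch of the core LoRA idea: the frozen weight matrix W is adapted by a trainable low-rank product B·A, scaled by alpha/r, so W' = W + (alpha/r)·B·A. All names and values below are illustrative assumptions.

```python
import numpy as np

# LoRA replaces a full update of the weight matrix W (d x k) with a
# low-rank pair: W' = W + (alpha / r) * B @ A, where B is d x r and
# A is r x k with r << min(d, k). Only A and B are trained.
rng = np.random.default_rng(0)
d, k, r, alpha = 64, 64, 4, 8  # illustrative sizes, not from the repo

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init -> W' == W at start

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass through the adapted layer: x @ (W + (alpha/r) * B @ A).T"""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(2, k))
# Because B starts at zero, the adapted layer initially matches the base layer.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)

# Trainable parameter count: r * (d + k) instead of d * k.
print(r * (d + k), "trainable vs", d * k, "full")
```

With these toy dimensions the adapter trains 512 parameters instead of 4,096, which is the memory saving that makes LoRA attractive for fine-tuning large models.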

Date Created 2023-11-14 (about a year ago)
Commits 21 (last one 10 months ago)
Stargazers 72 (0 this week)
Watchers 5 (0 this week)
Forks 14
License apache-2.0
Ranking

RepositoryStats indexes 595,856 repositories; of these, xyjigsaw/LLM-Pretrain-SFT is ranked #367,915 (38th percentile) for total stargazers and #335,688 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #69,645/119,431.

xyjigsaw/LLM-Pretrain-SFT is also tagged with popular topics, within which it is ranked: large-language-models (#750/1090), llama (#391/547), lora (#190/278)

Star History

GitHub stargazers over time (chart)

Watcher History

GitHub watchers over time; collection started in 2023 (chart)

Recent Commit History

21 commits on the default branch (master) since Jan 2022 (chart)

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The primary language is Python, but other languages are present as well.

Updated: 2024-12-11 @ 05:59 pm, id: 718523023 / R_kgDOKtPKjw