LambdaLabsML / distributed-training-guide

Best practices & guides on how to write distributed PyTorch training code
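For readers unfamiliar with the topic, the sketch below illustrates the kind of distributed PyTorch training code the guide covers: a generic one-process-per-GPU DistributedDataParallel loop launched with torchrun. It is a minimal illustration, not an excerpt from the repository.

```python
# Minimal sketch of a distributed PyTorch training loop (generic example,
# not taken from LambdaLabsML/distributed-training-guide).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT
    # for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Wrap the model in DDP so gradients are averaged across ranks.
    model = torch.nn.Linear(10, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Each rank would normally train on its own shard of the data
    # (e.g. via DistributedSampler); random tensors stand in here.
    for _ in range(10):
        inputs = torch.randn(32, 10, device=f"cuda:{local_rank}")
        loss = model(inputs).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched on a multi-GPU machine with, for example, `torchrun --nproc_per_node=2 train.py`.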

Date Created 2024-07-31 (5 months ago)
Commits 238 (last one 14 days ago)
Stargazers 322 (3 this week)
Watchers 7 (0 this week)
Forks 22
License MIT
Ranking

RepositoryStats indexes 598,436 repositories; of these, LambdaLabsML/distributed-training-guide is ranked #128,363 (79th percentile) for total stargazers and #272,132 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #22,061/120,139.

LambdaLabsML/distributed-training-guide is also tagged with popular topics; for these it is ranked: pytorch (#1,871/6,033), gpu (#315/919), cuda (#228/654)

Other Information

LambdaLabsML/distributed-training-guide has 1 open pull request on GitHub; 3 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 3 open issues and 37 closed issues.

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in '23)

Recent Commit History

238 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Python, but other languages are also present.

Updated: 2024-12-30 @ 04:09 AM, id: 836331773 / R_kgDOMdlo_Q