RLHFlow / RLHF-Reward-Modeling

Recipes to train reward models for RLHF.
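The repository's stated purpose is training reward models for RLHF. A common formulation (not confirmed as this repo's exact method) is the pairwise Bradley-Terry objective, which scores a "chosen" response above a "rejected" one. A minimal sketch of that loss, assuming scalar reward scores:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise Bradley-Terry loss commonly used for RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)).

    This is an illustrative sketch, not code from the repository.
    """
    margin = r_chosen - r_rejected
    # Numerically stable form: -log(sigmoid(m)) == log(1 + exp(-m))
    return math.log1p(math.exp(-margin))

# The loss shrinks as the chosen response outscores the rejected one,
# and is log(2) when the two scores are tied.
print(bradley_terry_loss(2.0, 0.0))  # small loss: correct ranking
print(bradley_terry_loss(0.0, 2.0))  # large loss: inverted ranking
```

In practice the scores come from a language model with a scalar head and the loss is averaged over a batch of preference pairs; the scalar version above only shows the objective itself.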

Date created: 2024-03-21 (3 months ago)
Commits: 62 (last one a day ago)
Stargazers: 359 (16 this week)
Watchers: 6 (0 this week)
Forks: 23
License: Apache-2.0
Ranking

RepositoryStats indexes 535,551 repositories; of these, RLHFlow/RLHF-Reward-Modeling is ranked #111,412 (79th percentile) for total stargazers and #285,644 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #18,406 out of 103,734.

RLHFlow/RLHF-Reward-Modeling is also tagged with popular topics; for these it is ranked: llm (#613/2,053).

Other Information

RLHFlow/RLHF-Reward-Modeling has 2 open pull requests on GitHub; 2 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 4 open issues and 12 closed issues.

Homepage URL: https://rlhflow.github.io/

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

62 commits on the default branch (main) since January 2022

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The only known language in this repository is Python.

Updated: 2024-07-03 @ 02:25 pm; id: 775286168 / R_kgDOLjXtmA