RLHFlow / RLHF-Reward-Modeling

Recipes for training reward models for RLHF.
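Reward models for RLHF are typically trained on pairwise preference data. As an illustration of the general idea (a minimal sketch of the standard Bradley-Terry pairwise objective, not necessarily this repository's exact implementation), the loss rewards the model for scoring the chosen response above the rejected one:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected).

    r_chosen / r_rejected are scalar reward-model scores for the
    preferred and dispreferred responses to the same prompt.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the chosen response scores higher, the loss is small;
# when the ranking is inverted, the loss grows.
correct_ranking = bradley_terry_loss(2.0, 0.0)
inverted_ranking = bradley_terry_loss(0.0, 2.0)
```

In practice the scores come from a language model with a scalar head, and the loss is averaged over a batch of preference pairs; the scalar version above only shows the shape of the objective.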

Date Created 2024-03-21 (7 months ago)
Commits 90 (last one 3 days ago)
Stargazers 785 (3 this week)
Watchers 20 (0 this week)
Forks 65
License apache-2.0
Ranking

RepositoryStats indexes 579,238 repositories; of these, RLHFlow/RLHF-Reward-Modeling is ranked #63,567 (89th percentile) for total stargazers and #110,375 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #9,942/115,001.

RLHFlow/RLHF-Reward-Modeling is also tagged with popular topics; for these it is ranked: llm (#484/2,654), llama3 (#38/144).

Other Information

RLHFlow/RLHF-Reward-Modeling has 2 open pull requests on GitHub; 5 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 10 open issues and 21 closed issues.

Homepage URL: https://rlhflow.github.io/

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

90 commits on the default branch (main) since January 2022

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The only known language in this repository is Python.

Updated: 2024-11-06 @ 10:57 PM, id: 775286168 / R_kgDOLjXtmA