PKU-Alignment / safe-rlhf

Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

Date Created 2023-05-15 (about a year ago)
Commits 111 (last one 7 months ago)
Stargazers 1,402 (2 this week)
Watchers 17 (0 this week)
Forks 118
License apache-2.0
Ranking

RepositoryStats indexes 609,392 repositories; of these, PKU-Alignment/safe-rlhf is ranked #38,061 (94th percentile) for total stargazers and #130,849 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language, it is ranked #5,928/122,942.

PKU-Alignment/safe-rlhf is also tagged with popular topics; for these it is ranked: llm (#390/3,118), reinforcement-learning (#128/1,367), gpt (#168/1,172), large-language-models (#129/1,131), transformer (#97/1,039), transformers (#103/866), llms (#83/580), llama (#99/578), datasets (#41/367).

Other Information

PKU-Alignment/safe-rlhf has GitHub issues enabled; there are 15 open issues and 73 closed issues.

Homepage URL: https://pku-beaver.github.io

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

111 commits on the default branch (main) since Jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Python, but other languages are present as well.

updated: 2025-01-30 @ 07:37am, id: 640914148 / R_kgDOJjOS5A