nikhilbarhate99 / PPO-PyTorch

Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
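PPO's clipped objective works by forming the probability ratio between the new and old policies, multiplying it by an advantage estimate, and clipping the ratio to [1 - eps, 1 + eps] so that each policy update stays small. The snippet below is a minimal PyTorch sketch of that loss for illustration only; it is not taken from this repository, and the function name and arguments (clipped_surrogate_loss, new_logprobs, old_logprobs, advantages, clip_eps) are assumptions made for the example.

import torch

def clipped_surrogate_loss(new_logprobs, old_logprobs, advantages, clip_eps=0.2):
    # Probability ratio r_t(theta) = pi_new(a|s) / pi_old(a|s), computed from log-probs
    ratios = torch.exp(new_logprobs - old_logprobs.detach())
    # Unclipped and clipped surrogate terms
    surr1 = ratios * advantages
    surr2 = torch.clamp(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two, so the loss is its negation
    return -torch.min(surr1, surr2).mean()

In a typical training loop this loss would be combined with a value-function loss and an entropy bonus before calling backward().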

Date Created 2018-09-27 (5 years ago)
Commits 98 (last one 5 months ago)
Stargazers 1,516 (6 this week)
Watchers 9 (0 this week)
Forks 328
License MIT
Ranking

RepositoryStats indexes 523,840 repositories; of these, nikhilbarhate99/PPO-PyTorch is ranked #32,179 (94th percentile) for total stargazers and #215,092 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #4,809/100,813.

nikhilbarhate99/PPO-PyTorch is also tagged with popular topics; for these it is ranked: deep-learning (#890/7,720), pytorch (#551/5,505), reinforcement-learning (#111/1,166), deep-reinforcement-learning (#40/332).

Other Information

nikhilbarhate99/PPO-PyTorch has 1 open pull request on GitHub; 6 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 13 open issues and 46 closed issues.

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in 2023)

Recent Commit History

5 commits on the default branch (master) since Jan 2022

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The only known language in this repository is Python.

Updated: 2024-05-31 @ 08:52 PM, id: 150605839 / R_kgDOCPoQDw