rail-berkeley / softlearning

Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. It includes the official implementation of the Soft Actor-Critic (SAC) algorithm.
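For context, the core idea behind SAC is to augment the usual return with an entropy bonus weighted by a temperature alpha. The sketch below is a minimal, library-agnostic illustration of the soft Bellman target used by SAC's critics; the function name and arguments are hypothetical and do not reflect the softlearning API.

```python
# Minimal sketch (NOT the softlearning API) of the entropy-augmented TD target
# that Soft Actor-Critic's critics regress toward:
#   y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))
import numpy as np

def soft_q_target(rewards, dones, next_q1, next_q2, next_log_pi,
                  gamma=0.99, alpha=0.2):
    """Compute the soft Q-learning target for a batch of transitions."""
    # Clipped double-Q value of the next state, minus the entropy term.
    next_value = np.minimum(next_q1, next_q2) - alpha * next_log_pi
    return rewards + gamma * (1.0 - dones) * next_value

# Toy usage with random batch data (shapes only; values are illustrative).
batch = 4
y = soft_q_target(
    rewards=np.random.randn(batch),
    dones=np.zeros(batch),
    next_q1=np.random.randn(batch),
    next_q2=np.random.randn(batch),
    next_log_pi=np.random.randn(batch),
)
```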

Date Created 2018-12-03 (5 years ago)
Commits 1,490 (last one 3 years ago)
Stargazers 1,200 (0 this week)
Watchers 37 (0 this week)
Forks 238
License other
Ranking

RepositoryStats indexes 565,600 repositories; of these, rail-berkeley/softlearning is ranked #42,258 (93rd percentile) for total stargazers and #57,873 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #6,500/111,362.

rail-berkeley/softlearning is also tagged with popular topics; for these it is ranked: deep-learning (#1,115/8,173), machine-learning (#1,135/7,699), reinforcement-learning (#136/1,260), deep-neural-networks (#94/618), deep-reinforcement-learning (#51/353)

Other Information

rail-berkeley/softlearning has 14 open pull requests on GitHub; 66 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 39 open issues and 64 closed issues.

Homepage URL: https://sites.google.com/view/sac-and-applications

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in 2023)

Recent Commit History

0 commits on the default branch (master) since January 2022

Inactive

No recent commits to this repository

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The primary language is Python, but other languages are also present...

updated: 2024-09-27 @ 02:33am, id: 160139764 / R_kgDOCYuJ9A