madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
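Because the endpoints are OpenAI-compatible, an existing OpenAI Python SDK client can be pointed at the local server simply by overriding its base URL. The sketch below illustrates this; the port, path, and model identifier are illustrative assumptions, not values documented on this page — check the project's README for the actual defaults.

    from openai import OpenAI

    # Point the standard OpenAI client at the local server instead of api.openai.com.
    client = OpenAI(
        base_url="http://localhost:10240/v1",  # assumed local endpoint; verify the real default port
        api_key="not-needed",                  # local server; any placeholder string stands in for a key
    )

    response = client.chat.completions.create(
        model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # example MLX model id (assumption)
        messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
    )
    print(response.choices[0].message.content)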
RepositoryStats indexes 619,774 repositories; of these, madroidmaq/mlx-omni-server is ranked #154,695 (75th percentile) for total stargazers and #304,012 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #27,368/125,413.
madroidmaq/mlx-omni-server has 1 open pull request on GitHub; 5 pull requests have been merged over the lifetime of the repository.
GitHub issues are enabled; there are 5 open issues and 11 closed issues.
There have been 7 releases; the latest, v0.3.2, was published on 2025-02-05 (19 days ago).
Star History
GitHub stargazers over time
Watcher History
GitHub watchers over time; collection started in '23
Recent Commit History
131 commits on the default branch (main) since Jan '22
Yearly Commits
Commits to the default branch (main) per year
Issue History
Languages
The only known language in this repository is Python
updated: 2025-02-24 @ 03:50pm, id: 883682853 / R_kgDONKvuJQ