madroidmaq / mlx-omni-server

MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
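Because the server exposes OpenAI-compatible endpoints, an existing OpenAI SDK client can simply be repointed at the local process. The sketch below uses the official OpenAI Python SDK; the base URL, port, API key placeholder, and model identifier are illustrative assumptions and should be checked against the project's README.

    from openai import OpenAI

    # Point the standard OpenAI client at the local server instead of api.openai.com.
    # The host/port below are assumptions; use whatever the server actually listens on.
    client = OpenAI(
        base_url="http://localhost:10240/v1",  # hypothetical local endpoint
        api_key="not-needed",                  # local inference; no real key is required
    )

    response = client.chat.completions.create(
        model="mlx-community/Llama-3.2-1B-Instruct-4bit",  # example MLX model id (assumption)
        messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
    )
    print(response.choices[0].message.content)

Existing tooling built on the OpenAI SDK should work unchanged once the client's base URL points at the local server.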

Date Created 2024-11-05 (about a month ago)
Commits 93 (last one a day ago)
Stargazers 145 (34 this week)
Watchers 4 (0 this week)
Forks 5
License unknown
Ranking

RepositoryStats indexes 594,458 repositories; of these, madroidmaq/mlx-omni-server is ranked #228,420 (62nd percentile) for total stargazers and #376,884 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #41,590/118,961.

madroidmaq/mlx-omni-server is also tagged with popular topics; for these it is ranked: openai (#976/2238), openai-api (#169/493), tts (#247/477)

Other Information

madroidmaq/mlx-omni-server has GitHub issues enabled, with 1 open issue and 3 closed issues.

There have been 3 releases; the latest, v0.2.0, was published on 2024-12-16 (a day ago).


Star History: GitHub stargazers over time

Watcher History: GitHub watchers over time (collection started in '23)

Recent Commit History: 93 commits on the default branch (main) since Jan '22

Yearly Commits: commits to the default branch (main) per year


Languages

The only known language in this repository is Python

updated: 2024-12-17 @ 11:25pm, id: 883682853 / R_kgDONKvuJQ