intel-analytics / ipex-llm
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with an iGPU and NPU, or a discrete GPU such as Arc, Flex, and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
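To make the HuggingFace integration mentioned above concrete, below is a minimal sketch in the style of the project's documented quickstart: replace `transformers`' AutoModelForCausalLM with the ipex-llm drop-in, load the weights in low-bit form, and move the model to the Intel GPU ("xpu" device). The model ID, prompt, and generation settings are illustrative placeholders, not part of this page.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

# Illustrative model ID; any HuggingFace causal LM checkpoint should work.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True asks ipex-llm to quantize weights to low-bit (INT4)
# format at load time -- the library's documented one-line change relative
# to plain `transformers`.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, trust_remote_code=True
)
model = model.to("xpu")  # move to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is IPEX-LLM?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output_ids = model.generate(input_ids, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```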
RepositoryStats indexes 584,353 repositories; of these, intel-analytics/ipex-llm is ranked #6,542 (99th percentile) for total stargazers and #5,029 for total watchers. GitHub reports the primary language for this repository as Python; among the 116,326 repositories using this language, it is ranked #983.
intel-analytics/ipex-llm has 283 open pull requests on GitHub, and 8,515 pull requests have been merged over the lifetime of the repository.
GitHub issues are enabled; there are 1,005 open issues and 1,615 closed issues.
There have been 19 releases; the latest, IPEX-LLM release 2.1.0, was published on 2024-08-22 (3 months ago).
Star History
[Chart: GitHub stargazers over time]
Watcher History
[Chart: GitHub watchers over time; data collection started in 2023]
Recent Commit History
[Chart: 3,164 commits on the default branch (main) since January 2022]
Yearly Commits
[Chart: commits to the default branch (main) per year]
Issue History
Languages
The primary language is Python, but other languages are present as well.