intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with iGPU and NPU, or a discrete GPU such as Arc, Flex, and Max); seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.

Date Created 2016-08-29 (8 years ago)
Commits 3,632 (last one 23 hours ago)
Stargazers 6,674 (6 this week)
Watchers 251 (0 this week)
Forks 1,260
License apache-2.0
Ranking

RepositoryStats indexes 579,238 repositories; of these, intel-analytics/ipex-llm is ranked #6,557 (99th percentile) for total stargazers and #5,031 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #976/115,001.

intel-analytics/ipex-llm is also tagged with popular topics; for these it is ranked: pytorch (#132/5,908), llm (#105/2,654), gpu (#30/894), transformers (#25/817)

Other Information

intel-analytics/ipex-llm has 281 open pull requests on GitHub; 8,469 pull requests have been merged over the lifetime of the repository.

GitHub issues are enabled; there are 995 open issues and 1,609 closed issues.

There have been 19 releases; the latest, IPEX-LLM release 2.1.0, was published on 2024-08-22 (2 months ago).

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

3,118 commits on the default branch (main) since Jan 2022

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The primary language is Python, but other languages are also used.

updated: 2024-11-07 @ 12:25am, id: 66823715 / R_kgDOA_umIw