Statistics for topic llama
RepositoryStats tracks 533,836 GitHub repositories; 434 of these are tagged with the llama topic. The most common primary language for repositories using this topic is Python (229). Other languages include: Jupyter Notebook (33), TypeScript (23), C++ (21), JavaScript (17), Rust (15), and Go (13).
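The counts above imply Python accounts for just over half of the tagged repositories. A throwaway Python sketch to check that arithmetic (the counts are hard-coded from the summary above, not fetched from the site):

```python
# Language counts copied from the RepositoryStats summary for the llama topic.
langs = {
    "Python": 229,
    "Jupyter Notebook": 33,
    "TypeScript": 23,
    "C++": 21,
    "JavaScript": 17,
    "Rust": 15,
    "Go": 13,
}
total_tagged = 434  # repositories tagged with the llama topic

listed = sum(langs.values())              # repos covered by the listed languages
python_share = langs["Python"] / total_tagged

print(f"listed languages cover {listed} of {total_tagged} repos")
print(f"Python share: {python_share:.1%}")  # just over half
```

The listed languages cover 351 of the 434 tagged repositories; the remainder use other primary languages or have none detected.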
Stargazers over time for topic llama
Most starred repositories for topic llama
Trending repositories for topic llama
Get up and running with Llama 3, Mistral, Gemma, and other large language models.
A high-throughput and memory-efficient inference and serving engine for LLMs
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Turn natural language into commands. Your CLI tasks, now as easy as a conversation. Run it 100% offline, or use OpenAI's models.
Self-paced bootcamp on Generative AI. Tutorials on ML fundamentals, LLMs, RAGs, LangChain, Fine-tuning & AI Agents (CrewAI)
Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations.
Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫
A low-latency & high-throughput serving engine for LLMs
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
Official PyTorch Implementation for the "Model Tree Heritage Recovery" paper.
CLI to demonstrate running a large language model (LLM) on Apple Neural Engine.
LSP-AI is an open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software engineers, not replace them.
Llama Chinese community. The Llama 3 online demo and fine-tuned models are now open, the latest Llama 3 learning resources are aggregated in real time, and all code has been updated for Llama 3. Building the best Chinese Llama large model, fully open source and commercially usable.
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting a...
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications based on Langchain and language models such as ChatGLM, Qwen, and Llama; local knowledge based LLM (like ChatGLM, Qwen and ...
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.