32 results found
Primary language:
- Python (14)
- Go (5)
- Jupyter Notebook (5)
- C++ (2)
- JavaScript (1)
- Shell (1)
- TypeScript (1)
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting a ...
Created 2023-07-17 · 1,771 commits to main branch, last one 6 days ago
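To make the "composable FSDP & PEFT methods" phrase above concrete, here is a minimal sketch of attaching a LoRA adapter to a Llama-style causal LM with the Hugging Face peft library. This is not code from the repository; the model name, rank, and target module names are illustrative assumptions.

```python
# Hypothetical sketch: wrap a causal LM with a LoRA adapter via peft.
# Model name, rank, and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Per the description, the repository layers FSDP on top of adapters like this for single- and multi-node GPU training.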
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any...
Created 2023-06-14 · 1,003 commits to main branch, last one 14 hours ago
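The "single line of code" claim in the entry above refers to pointing an existing OpenAI client at a locally hosted, OpenAI-compatible endpoint. The sketch below assumes such an endpoint is running; the port, URL, and model name are placeholders, not values taken from this listing.

```python
# Hypothetical sketch: reuse the OpenAI SDK against a locally hosted model.
# The base_url, port, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9997/v1",  # local OpenAI-compatible endpoint
    api_key="not-needed-for-local",       # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="my-local-llm",  # whichever model the local server has loaded
    messages=[{"role": "user", "content": "Summarize FSDP in one sentence."}],
)
print(response.choices[0].message.content)
```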
Data processing with ML, LLM and Vision LLM
Created 2022-01-08 · 505 commits to main branch, last one a day ago
📖 A curated list of Awesome LLM/VLM Inference Papers with code, such as FlashAttention, PagedAttention, Parallelism, etc. 🎉🎉
Created 2023-08-27 · 426 commits to main branch, last one 2 days ago
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention)
Created 2023-07-30 · 979 commits to main branch, last one 2 days ago
🔒 Enterprise-grade API gateway that helps you monitor and impose cost or rate limits per API key. Get fine-grained access control and monitoring per user, application, or environment. Supports OpenAI...
Created 2023-07-18 · 728 commits to main branch, last one 14 days ago
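The gateway pattern described above is usually transparent to clients: requests keep the OpenAI wire format but are sent to the gateway's host with a gateway-issued key, so cost and rate limits attach to that key. The host, path, and key below are placeholders for illustration, not this project's documented API.

```python
# Hypothetical sketch of calling an OpenAI-style endpoint through a gateway.
# Host, path, and key are placeholders; the real gateway's routes may differ.
import requests

GATEWAY_URL = "https://my-gateway.example.com/v1/chat/completions"
GATEWAY_KEY = "gateway-issued-key"  # limits and monitoring are scoped to this key

resp = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {GATEWAY_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```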
Evaluate your LLM's response with Prometheus and GPT4 💯
Created 2024-04-18 · 204 commits to main branch, last one 3 days ago
AI Inference Operator for Kubernetes
Created 2023-10-21 · 224 commits to main branch, last one 20 hours ago
Low latency JSON generation using LLMs ⚡️
Created 2023-11-15 · 76 commits to main branch, last one 8 months ago
[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
Created 2024-03-06 · 404 commits to main branch, last one a day ago
A large-scale simulation framework for LLM inference
Created 2023-11-02 · 23 commits to main branch, last one 14 days ago
The goal of RamaLama is to make working with AI boring.
Created 2024-07-24 · 947 commits to main branch, last one 6 days ago
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
Created 2023-07-03 · 324 commits to main branch, last one 3 days ago
TopicGPT: A Prompt-Based Framework for Topic Modeling (NAACL'24)
Created 2023-11-02 · 20 commits to main branch, last one 21 days ago
Set up and run a local LLM and chatbot using consumer-grade hardware.
Created 2023-09-12 · 345 commits to main branch, last one 23 days ago
LLM notes, including model inference, transformer model structure, and LLM framework code analysis.
Created 2024-09-18 · 166 commits to main branch, last one a day ago
Documentation on setting up an LLM server on Debian from scratch, using Ollama/vLLM, Open WebUI, OpenedAI Speech, and ComfyUI.
Created 2024-03-26 · 13 commits to main branch, last one about a month ago
[Deep learning model deployment framework] Supports tf/torch/trt/trtllm/vllm and other NN frameworks; supports dynamic batching and streaming modes; offers both Python and C++ APIs; rate-limitable, extensible, and high-performance. Helps users quickly deploy models to production and serve them over HTTP/RPC interfaces.
Created 2024-07-04 · 58 commits to master branch, last one 19 days ago
Booster - an open accelerator for LLMs. Better inference and debugging for AI hackers.
Created 2023-05-04 · 491 commits to main branch, last one 3 months ago
Framework-agnostic computer vision inference. Run 1000+ models by changing only one line of code. Supports models from transformers, timm, ultralytics, vllm, ollama, and your own custom models.
Created 2024-10-10 · 284 commits to main branch, last one 6 days ago
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
This repository has been archived
Created 2024-03-05 · 22 commits to main branch, last one 7 months ago
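Although this reference implementation is archived, DoRA support is also available in recent releases of the Hugging Face peft library. A minimal sketch, assuming a peft version recent enough to expose the use_dora flag, is just a LoRA config with one extra switch; the rank and target module names are placeholders.

```python
# Hypothetical sketch: enabling DoRA through peft's LoraConfig, assuming a
# peft release recent enough to expose use_dora. Values are placeholders.
from peft import LoraConfig

dora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,   # decompose each adapted weight into magnitude and direction
    task_type="CAUSAL_LM",
)
# Passed to peft.get_peft_model(model, dora_config) exactly like a LoRA config.
```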
Fine-tuning and serving LLMs on any cloud
This repository has been archived
Created 2023-07-30 · 44 commits to main branch, last one about a year ago
llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as UI, RESTful API, auto-scaling, computing resource...
Created 2024-02-28 · 116 commits to main branch, last one 6 months ago
Fully-featured, beautiful web interface for vLLM - built with NextJS.
Created 2024-03-05 · 129 commits to main branch, last one 4 months ago
A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.
Created 2023-10-28 · 52 commits to master branch, last one 11 months ago
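As a rough illustration of the integration described above (not the project's actual code), a Ray Serve deployment can wrap a vLLM engine and expose it over HTTP. The model name, request schema, and resource settings below are assumptions.

```python
# Hypothetical sketch: a Ray Serve deployment wrapping a vLLM engine.
# Model name and request schema are assumptions, not this project's code.
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self) -> None:
        # Load the model once per replica.
        self.llm = LLM(model="facebook/opt-125m")
        self.params = SamplingParams(temperature=0.7, max_tokens=128)

    async def __call__(self, request: Request) -> dict:
        body = await request.json()
        # Note: LLM.generate is synchronous; production services typically use
        # vLLM's async engine, but this keeps the sketch short.
        outputs = self.llm.generate([body["prompt"]], self.params)
        return {"text": outputs[0].outputs[0].text}


app = VLLMDeployment.bind()
# Launch inside a running Ray cluster with:  serve.run(app)
```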
An endpoint server for efficiently serving quantized open-source LLMs for code.
Created 2023-09-25 · 3 commits to main branch, last one about a year ago
Efficient LLM inference on Slurm clusters using vLLM.
Created 2024-03-06 · 217 commits to main branch, last one 4 days ago
Extensible generative AI platform on Kubernetes with OpenAI-compatible APIs.
Created 2024-03-27 · 698 commits to main branch, last one 7 days ago
Demonstrates the remarkable effect of vllm on Chinese large language models.
Created 2023-07-08 · 6 commits to master branch, last one about a year ago
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
Created 2023-11-20 · 250 commits to main branch, last one a day ago