Statistics for topic fine-tuning
RepositoryStats tracks 595,857 GitHub repositories; 189 of these are tagged with the fine-tuning topic. The most common primary language among them is Python (124), followed by Jupyter Notebook (38).
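The counts above can be turned into rough language shares; a minimal sketch (the "Other" bucket is inferred as whatever remains of the 189 tagged repositories, which may span several languages):

```python
# Language share among the 189 fine-tuning-tagged repositories.
# Counts are taken from the stats above; "Other" is the inferred remainder.
total = 189
counts = {"Python": 124, "Jupyter Notebook": 38}
counts["Other"] = total - sum(counts.values())

for lang, n in counts.items():
    print(f"{lang}: {n} ({n / total:.1%})")
```

By this estimate Python accounts for roughly two thirds of the topic's repositories.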
Stargazers over time for topic fine-tuning
Most starred repositories for topic fine-tuning
Multi-lingual large voice generation model, providing inference, training and deployment full-stack ability.
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
Run any open-source LLM, such as Llama or Mistral, as an OpenAI-compatible API endpoint in the cloud.
Trending repositories for topic fine-tuning
A fine-tuned model from Qwen2.5-1.5B-Instruct, capable of handling sensitive topics (mainly specializing in adult content).
Kiln AI: the easiest tool for fine-tuning LLM models, synthetic data generation, and collaborating on datasets.
Create synthetic datasets for training and testing Large Language Models (LLMs) in a Question-Answering (QA) context.
The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models
This repository provides programs to build Retrieval Augmented Generation (RAG) code for Generative AI with LlamaIndex, Deep Lake, and Pinecone leveraging the power of OpenAI and Hugging Face models f...
A C++ implementation of Open Interpreter, based on llama.cpp.
🚀LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
A JAX research toolkit for building, editing, and visualizing neural networks.
LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry
The official implementation of Self-Play Preference Optimization (SPPO)
A library for easily merging multiple LLM experts and efficiently training the merged LLM.