Statistics for topic lora
RepositoryStats tracks 595,856 GitHub repositories; 278 of these are tagged with the lora topic. The most common primary language for repositories using this topic is Python (127). Other languages include C++ (35), C (32), and Jupyter Notebook (25).
Stargazers over time for topic lora
Most starred repositories for topic lora
Trending repositories for topic lora
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
Consistency Distillation with Target Timestep Selection and Decoupled Guidance
End to End Generative AI Industry Projects on LLM Models with Deployment_Awesome LLM Projects
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) or 100+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL...
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
End to End Generative AI Industry Projects on LLM Models with Deployment_Awesome LLM Projects
Consistency Distillation with Target Timestep Selection and Decoupled Guidance
A toolbox for deep learning model deployment using C++: YoloX | YoloV7 | YoloV8 | Gan | OCR | MobileVit | Scrfd | MobileSAM | StableDiffusion
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) or 100+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL...
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
Codebase for "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation"
Monitor your Meshtastic network with the Meshtastic Prometheus exporter
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
Open Source Application for Advanced LLM Engineering: interact with, train, fine-tune, and evaluate large language models on your own computer.
A library for easily merging multiple LLM experts and efficiently training the merged LLM.
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
《开源大模型食用指南》 (Open-Source LLM Guide): a Linux-based tutorial, tailored for Chinese beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying Chinese and international open-source large language models (LLMs) and multimodal large models (MLLMs)
A library for easily merging multiple LLM experts and efficiently training the merged LLM.
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
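Nearly all of the repositories above build on the same idea: LoRA (Low-Rank Adaptation) leaves a pretrained weight matrix W frozen and trains only a low-rank product, so the adapted weight is W + (alpha / r) * (A @ B), where r is the chosen rank and alpha a scaling hyperparameter. A minimal dependency-free sketch of that update, with hypothetical tiny dimensions chosen purely for illustration (not taken from any repository listed here):

```python
# Illustrative sketch of the LoRA update: W' = W + (alpha / r) * (A @ B).
# Pure Python, no dependencies; all dimensions are made up for readability.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_in, d_out, r = 4, 4, 2   # hypothetical layer sizes; r is the LoRA rank
alpha = 4.0                # LoRA scaling hyperparameter

# Frozen pretrained weight (d_in x d_out), here an identity matrix.
W = [[1.0 if i == j else 0.0 for j in range(d_out)] for i in range(d_in)]

# Trainable low-rank factors: A is d_in x r, B is r x d_out.
# Only these 2 * d * r values are updated during fine-tuning,
# instead of all d_in * d_out entries of W.
A = [[0.1] * r for _ in range(d_in)]
B = [[0.2] * d_out for _ in range(r)]

scale = alpha / r
delta = matmul(A, B)
W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(d_out)]
             for i in range(d_in)]

x = [[1.0, 0.0, 0.0, 0.0]]   # a single input row vector
y = matmul(x, W_adapted)     # forward pass through the adapted layer
print(y[0])
```

With rank r much smaller than the layer width, the trainable parameter count drops from d_in * d_out to r * (d_in + d_out), which is why the fine-tuning frameworks above (LLaMA-Factory, unsloth, 🤗 PEFT, swift) can adapt large models on modest hardware.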