Statistics for topic lora
RepositoryStats tracks 579,129 GitHub repositories; 269 of them are tagged with the lora topic. The most common primary language among repositories using this topic is Python (124). Other languages include C++ (35), C (32), and Jupyter Notebook (24).
Stargazers over time for topic lora
Most starred repositories for topic lora (view more)
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Finetune Llama 3.2, Mistral, Phi, Qwen & Gemma LLMs 2-5x faster with 80% less memory
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vis...
An addon module for portapack to add extra features to it for more fun.
Consistency Distillation with Target Timestep Selection and Decoupled Guidance
Realtime web UI to run against a Meshtastic regional or private mesh network.
Codebase for "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation"
Trending repositories for topic lora (view more)
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
Finetune Llama 3.2, Mistral, Phi, Qwen & Gemma LLMs 2-5x faster with 80% less memory
Codebase for "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation"
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
End to End Generative AI Industry Projects on LLM Models with Deployment_Awesome LLM Projects
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
A library for easily merging multiple LLM experts, and efficiently train the merged LLM.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
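The lora topic mixes LoRa radio projects (such as the Meshtastic and PortaPack repositories above) with LoRA (Low-Rank Adaptation) fine-tuning tools such as 🤗 PEFT, LoRAX, and DoRA. As a rough illustration of the latter, the minimal Python sketch below shows one common way a LoRA adapter is attached to a causal language model with the PEFT library listed above; the base model name, rank, and target modules are illustrative assumptions, not values taken from this page.

    # Minimal LoRA sketch using the 🤗 PEFT library (illustrative values only)
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model, TaskType

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM; gpt2 is just an example

    lora_cfg = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling applied to the LoRA update
        lora_dropout=0.05,
        target_modules=["c_attn"],  # GPT-2's attention projection; model-specific
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # only the small adapter matrices are trainable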