Statistics for topic lora
RepositoryStats tracks 523,840 GitHub repositories; 230 of these are tagged with the lora topic. The most common primary language for repositories using this topic is Python (106). Other languages include C++ (33), C (24), and Jupyter Notebook (21).
Stargazers over time for topic lora
Trending repositories for topic lora
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
ms-swift: Use PEFT or Full-parameter to finetune 250+ LLMs or 25+ MLLMs
[ICML2024] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
A Meshtastic desktop client, allowing simple, offline deployment and administration of an ad-hoc mesh communication network. Built in Rust and TypeScript.
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
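Two of the trending entries above are LoRA variants: DoRA re-parameterizes the adapted weight into a magnitude and a direction, and NOLA learns only scalar coefficients over fixed random basis matrices instead of the low-rank factors themselves. The PyTorch sketch below contrasts the three update rules on a single frozen linear layer; the shapes, rank, and basis size are illustrative assumptions, not code from the repositories.

```python
import torch

# Toy frozen weight; d_out, d_in, rank, and k are made-up values
# chosen only to keep the example small.
d_out, d_in, rank, k = 64, 64, 8, 16
W = torch.randn(d_out, d_in)                      # frozen pretrained weight

# LoRA baseline: learn a low-rank update B @ A (the usual alpha/r
# scaling is omitted for brevity). B starts at zero so the adapted
# weight initially equals W.
A = torch.randn(rank, d_in, requires_grad=True)
B = torch.zeros(d_out, rank, requires_grad=True)
W_lora = W + B @ A

# DoRA: decompose the adapted weight into a learned per-column
# magnitude m and a direction V / ||V||, with m initialized from
# the column norms of W.
V = W + B @ A
m = W.norm(dim=0, keepdim=True).clone().requires_grad_()
W_dora = m * V / V.norm(dim=0, keepdim=True)

# NOLA: never learn A and B directly; each factor is a learned linear
# combination of k fixed random basis matrices, so only 2k scalar
# coefficients are trainable.
A_basis = torch.randn(k, rank, d_in)              # frozen random basis
B_basis = torch.randn(k, d_out, rank)
alpha = torch.zeros(k, requires_grad=True)
beta = torch.zeros(k, requires_grad=True)
A_nola = (alpha[:, None, None] * A_basis).sum(dim=0)
B_nola = (beta[:, None, None] * B_basis).sum(dim=0)
W_nola = W + B_nola @ A_nola
```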
Most starred repositories for topic lora
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference on a single 4GB GPU
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
ms-swift: Use PEFT or Full-parameter to finetune 250+ LLMs or 25+ MLLMs
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
YaYi large language models: secure and reliable proprietary LLMs built for enterprise customers; a series of LlaMA 2 & BLOOM models trained on large-scale Chinese-English multi-domain instruction data, developed by the Zhongke Wenge algorithm team. (Repo for YaYi Chinese LLMs based on LlaMA2 & BLOOM)
OneTrainer is a one-stop solution for all your Stable Diffusion training needs.
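Most of the fine-tuning projects in this list build on the same adapter mechanism that 🤗 PEFT and loralib implement. Below is a minimal, hedged sketch of attaching LoRA adapters with PEFT; the gpt2 checkpoint and the c_attn target module are assumptions chosen to keep the example small, and the right target_modules differ per architecture.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters via 🤗 PEFT.
# The checkpoint name and target_modules below are assumptions;
# check the module names of your own model before reusing them.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # effective update is (alpha / r) * B @ A
    target_modules=["c_attn"],  # gpt2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

Only the injected adapter matrices receive gradients; the base model stays frozen, which is where the large memory savings advertised by several repositories above come from.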