53 results found
Filter by primary language:
- Python (45)
- Jupyter Notebook (3)
- C++ (1)
- Rust (1)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Created 2020-01-23 · 2,625 commits to master branch, last one a day ago
Run Mixtral-8x7B models in Colab or on consumer desktops
Created 2023-12-15 · 86 commits to master branch, last one 11 months ago
Decentralized deep learning in PyTorch. Built to train models on the machines of thousands of volunteers across the world.
Created 2020-02-27 · 583 commits to master branch, last one about a month ago
Mixture-of-Experts for Large Vision-Language Models
Created 2023-12-14 · 228 commits to main branch, last one 18 days ago
Optimizing inference proxy for LLMs
Created 2024-08-22 · 267 commits to main branch, last one 22 days ago
PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al., https://arxiv.org/abs/1701.06538 (see the sketch below)
Created 2019-07-19 · 30 commits to master branch, last one 8 months ago
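
The layer above comes down to a learned gate that scores all experts for each token, keeps only the top-k scores, renormalizes them, and sums the selected experts' outputs with those weights. Below is a minimal PyTorch sketch of that idea, assuming simple two-layer MLP experts; class and parameter names are illustrative, and the paper's noisy gating and load-balancing loss are omitted, so this is a sketch of the technique rather than the repository's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal sparsely-gated MoE layer: each token is routed to its top-k experts."""
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2, hidden: int = 512):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)        # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:            # x: (tokens, dim)
        logits = self.gate(x)                                       # (tokens, num_experts)
        weights, idx = logits.topk(self.k, dim=-1)                  # keep only k experts per token
        weights = F.softmax(weights, dim=-1)                        # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (idx == e).nonzero(as_tuple=True)     # tokens routed to expert e
            if token_idx.numel() == 0:
                continue                                            # expert received no tokens
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out

layer = TopKMoE(dim=64)
print(layer(torch.randn(10, 64)).shape)   # torch.Size([10, 64])
```

Because only k experts run per token, compute per token stays roughly constant while the total parameter count grows with the number of experts, which is the point of the sparsely-gated design.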
Codebase for Aria - an Open Multimodal Native MoE
Created 2024-09-29 · 204 commits to main branch, last one 3 days ago
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
Created 2023-07-24 · 212 commits to main branch, last one 5 months ago
Tutel MoE: An Optimized Mixture-of-Experts Implementation
Created 2021-08-06 · 182 commits to main branch, last one 29 days ago
Surrogate Modeling Toolbox
Created 2016-11-08 · 1,568 commits to master branch, last one a day ago
A TensorFlow Keras implementation of "Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts" (KDD 2018); see the sketch below
Created 2018-09-10 · 22 commits to master branch, last one 3 years ago
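
MMoE keeps one shared pool of expert networks but gives every task its own softmax gate, so each task mixes the shared experts differently before feeding its own tower. The repository is TensorFlow Keras; the sketch below is an illustrative PyTorch rendering of the same structure with hypothetical layer sizes, not the repository's API.

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    """Multi-gate Mixture-of-Experts: shared experts, one softmax gate per task."""
    def __init__(self, in_dim: int, expert_dim: int = 32, num_experts: int = 4, num_tasks: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(num_experts)
        )
        self.gates = nn.ModuleList(nn.Linear(in_dim, num_experts) for _ in range(num_tasks))
        self.towers = nn.ModuleList(nn.Linear(expert_dim, 1) for _ in range(num_tasks))

    def forward(self, x):                                              # x: (batch, in_dim)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, experts, expert_dim)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1)                         # task-specific mixture weights
            mixed = (w.unsqueeze(-1) * expert_out).sum(dim=1)          # (batch, expert_dim)
            outputs.append(tower(mixed))                               # one prediction head per task
        return outputs

model = MMoE(in_dim=16)
print([y.shape for y in model(torch.randn(8, 16))])   # [torch.Size([8, 1]), torch.Size([8, 1])]
```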
A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models
Created 2020-07-13 · 33 commits to master branch, last one about a year ago
From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
Created 2024-01-22 · 116 commits to main branch, last one about a month ago
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
Created 2024-01-11 · 31 commits to main branch, last one 7 months ago
A library for easily merging multiple LLM experts and efficiently training the merged LLM.
Created 2024-04-08 · 43 commits to main branch, last one 3 months ago
GMoE could be the next backbone model for many kinds of generalization tasks.
Created 2022-05-28 · 28 commits to main branch, last one about a year ago
Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch
Created 2023-03-26 · 68 commits to main branch, last one 6 months ago
Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch (see the sketch below)
Created 2023-08-04 · 28 commits to main branch, last one 8 months ago
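
Soft MoE does away with hard routing: every token is softly averaged into a fixed set of expert "slots" (softmax over tokens), each expert processes its slots, and every token then takes a softmax-weighted combination of the slot outputs, so the layer stays fully differentiable and no token is dropped. The sketch below illustrates that mechanism under those assumptions; names and sizes are illustrative, not the repository's code.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Soft MoE: differentiable token-to-slot dispatch and slot-to-token combine."""
    def __init__(self, dim: int, num_experts: int = 4, slots_per_expert: int = 1, hidden: int = 256):
        super().__init__()
        self.slots_per_expert = slots_per_expert
        num_slots = num_experts * slots_per_expert
        self.phi = nn.Parameter(torch.randn(dim, num_slots) * dim ** -0.5)   # slot parameters
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                   # x: (batch, tokens, dim)
        logits = x @ self.phi                               # (batch, tokens, slots)
        dispatch = logits.softmax(dim=1)                    # over tokens: what each slot reads
        combine = logits.softmax(dim=2)                     # over slots: what each token gets back
        slots = torch.einsum("bts,btd->bsd", dispatch, x)   # slot inputs = soft token averages
        slots = slots.reshape(x.size(0), len(self.experts), self.slots_per_expert, -1)
        outs = torch.stack([e(slots[:, i]) for i, e in enumerate(self.experts)], dim=1)
        outs = outs.flatten(1, 2)                           # (batch, slots, dim)
        return torch.einsum("bts,bsd->btd", combine, outs)  # mix slot outputs back per token

layer = SoftMoE(dim=64)
print(layer(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])
```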
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Created 2023-12-26 · 116 commits to main branch, last one 9 months ago
MoH: Multi-Head Attention as Mixture-of-Head Attention (see the sketch below)
Created 2024-10-08 · 19 commits to main branch, last one about a month ago
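
MoH treats the attention heads themselves as experts: a router scores the heads per token, only the top-k heads contribute, and their outputs are mixed with the routing weights. The sketch below is a heavily simplified, hypothetical rendering of that idea; it omits the shared heads and auxiliary objectives described for MoH, and all names are illustrative rather than the repository's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKHeadAttention(nn.Module):
    """Self-attention where each token uses only its top-k heads, weighted by a router."""
    def __init__(self, dim: int, num_heads: int = 8, k: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.topk, self.dh = num_heads, k, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.router = nn.Linear(dim, num_heads)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                        # x: (batch, tokens, dim)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.h, self.dh).transpose(1, 2) for z in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.dh ** 0.5, dim=-1)
        heads = attn @ v                                         # (batch, heads, tokens, dh)

        scores = self.router(x)                                  # (batch, tokens, heads)
        topv, topi = scores.topk(self.topk, dim=-1)              # keep the k best heads per token
        gate = torch.zeros_like(scores).scatter(-1, topi, topv.softmax(dim=-1))
        heads = heads * gate.permute(0, 2, 1).unsqueeze(-1)      # zero out non-selected heads
        return self.proj(heads.transpose(1, 2).reshape(b, t, -1))

layer = TopKHeadAttention(dim=64)
print(layer(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])
```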
Fast Inference of MoE Models with CPU-GPU Orchestration
Created 2024-02-05 · 49 commits to main branch, last one 7 months ago
MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
Created 2024-10-08 · 11 commits to main branch, last one 2 months ago
[SIGIR'24] The official implementation code of MOELoRA.
Created 2023-10-19 · 21 commits to master branch, last one 5 months ago
A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE).
Created 2023-08-17 · 137 commits to main branch, last one 18 days ago
Repository for "See More Details: Efficient Image Super-Resolution by Experts Mining", ICML 2024
Created 2024-02-05 · 31 commits to main branch, last one 5 months ago
PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind (see the sketch below)
Created 2024-07-09 · 26 commits to main branch, last one 4 months ago
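
PEER replaces a dense feed-forward block with a very large pool of tiny single-neuron experts and uses product-key retrieval, so only the top-k experts are scored and evaluated per token. The sketch below shows that retrieval pattern in a heavily simplified form, assuming a perfect-square number of rank-1 experts; the paper's multi-head retrieval and query normalization are omitted, and every name is illustrative rather than the repository's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PEERSketch(nn.Module):
    """Simplified product-key expert retrieval: pick top-k tiny experts per token."""
    def __init__(self, dim: int, num_experts: int = 1024, topk: int = 8):
        super().__init__()
        self.n_sub = int(num_experts ** 0.5)
        assert self.n_sub * self.n_sub == num_experts, "num_experts must be a perfect square"
        self.topk = topk
        self.query = nn.Linear(dim, dim)
        # product keys: the key of expert (i, j) is the pair (keys1[i], keys2[j])
        self.keys1 = nn.Parameter(torch.randn(self.n_sub, dim // 2))
        self.keys2 = nn.Parameter(torch.randn(self.n_sub, dim // 2))
        # each expert is rank-1: a down-projection vector u and an up-projection vector v
        self.u = nn.Embedding(num_experts, dim)
        self.v = nn.Embedding(num_experts, dim)

    def forward(self, x):                                     # x: (batch, dim)
        q1, q2 = self.query(x).chunk(2, dim=-1)
        s1, s2 = q1 @ self.keys1.t(), q2 @ self.keys2.t()     # (batch, n_sub) each
        v1, i1 = s1.topk(self.topk, dim=-1)                   # search each key half independently,
        v2, i2 = s2.topk(self.topk, dim=-1)                   # then combine the k*k candidates
        cand = (v1[:, :, None] + v2[:, None, :]).flatten(1)
        scores, flat = cand.topk(self.topk, dim=-1)           # global top-k expert scores
        row = torch.div(flat, self.topk, rounding_mode="floor")
        idx = i1.gather(1, row) * self.n_sub + i2.gather(1, flat % self.topk)
        w = F.softmax(scores, dim=-1)                         # router weights over chosen experts
        h = F.gelu(torch.einsum("bd,bkd->bk", x, self.u(idx)))    # each expert's single hidden unit
        return torch.einsum("bk,bk,bkd->bd", w, h, self.v(idx))   # weighted sum of expert outputs

layer = PEERSketch(dim=64)
print(layer(torch.randn(4, 64)).shape)   # torch.Size([4, 64])
```

Because the full key of expert (i, j) is split into two halves, scoring all 1,024 experts in this sketch only requires comparisons against 2 × 32 sub-keys, which is what makes scaling the expert count cheap.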
Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts
Created 2023-04-21 · 42 commits to main branch, last one 2 months ago
PyTorch library for cost-effective, fast and easy serving of MoE models.
Created 2024-01-22 · 16 commits to main branch, last one 4 months ago
[NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models
Created 2024-02-20 · 17 commits to main branch, last one about a month ago
[NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks
Created 2024-06-07 · 23 commits to main branch, last one 28 days ago