6 results found

73 · 1.2k · apache-2.0 · 25
Quantized Attention that achieves speedups of 2.1-3.1x and 2.7-5.1x over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.
Created 2024-10-03
84 commits to main branch, last one 2 days ago
20 · 547 · apache-2.0 · 7
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
Created 2024-11-27
97 commits to main branch, last one 5 days ago
16 · 324 · apache-2.0 · 4
SpargeAttention: A training-free sparse attention that can accelerate inference for any model.
Created 2025-02-25
26 commits to main branch, last one 7 days ago
12 · 191 · apache-2.0 · 3
[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Created 2024-05-31
64 commits to main branch, last one 28 days ago
12 · 138 · apache-2.0 · 1
⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud, or on AI hardware.
Created 2023-04-16
358 commits to main branch, last one 9 months ago
This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models"
Created 2024-06-11
8 commits to master branch, last one 8 months ago