6 results found
Quantized Attention achieves 2-3x and 3-5x speedups over FlashAttention and xformers, respectively, without losing end-to-end metrics across language, image, and video models.
Created
2024-10-03
87 commits to main branch, last one 4 days ago
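The entry above describes INT8-quantized attention. As a rough illustration of the general idea (not this repo's actual kernels), here is a minimal NumPy sketch: Q and K are symmetrically quantized to INT8 per tensor, the score matmul runs on integers, and the result is dequantized before the softmax. All names and the per-tensor scaling scheme are assumptions for illustration.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: returns int8 values and a scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_attention(Q, K, V):
    """Attention with Q/K in INT8; scores are dequantized before the softmax."""
    d = Q.shape[-1]
    q_int, q_scale = quantize_int8(Q)
    k_int, k_scale = quantize_int8(K)
    # Integer matmul (accumulated in int32), then dequantize with both scales.
    scores = q_int.astype(np.int32) @ k_int.astype(np.int32).T
    scores = scores.astype(np.float32) * (q_scale * k_scale) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 64)).astype(np.float32) for _ in range(3))
out = quantized_attention(Q, K, V)
```

The speedup in practice comes from running the score matmul on INT8 tensor cores; the sketch only shows why accuracy survives, since the quantization error on the scores is small relative to the softmax's dynamic range.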
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
Created
2024-11-27
104 commits to main branch, last one 7 days ago
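The title above suggests caching module outputs across diffusion timesteps when the timestep embedding changes little. A minimal sketch of that caching pattern, with all names and the distance-threshold heuristic being assumptions for illustration rather than the repo's actual mechanism:

```python
import numpy as np

class EmbeddingCache:
    """Reuse a module's output when the current timestep embedding is close
    to the embedding that produced the cached output."""

    def __init__(self, fn, tol=1e-2):
        self.fn = fn          # the expensive module to wrap
        self.tol = tol        # embedding-distance threshold for reuse
        self.last_emb = None
        self.last_out = None
        self.hits = 0         # number of skipped computations

    def __call__(self, x, emb):
        # If the timestep embedding barely moved, skip the computation.
        if self.last_emb is not None and np.linalg.norm(emb - self.last_emb) < self.tol:
            self.hits += 1
            return self.last_out
        out = self.fn(x, emb)
        self.last_emb, self.last_out = emb.copy(), out
        return out

cache = EmbeddingCache(lambda x, emb: x + emb, tol=1e-3)
x = np.ones(4)
emb = np.zeros(4)
y1 = cache(x, emb)
y2 = cache(x, emb)  # embedding unchanged, so the cached output is reused
```

The design trade-off is that a reused output is stale with respect to the new input `x`; the bet is that when consecutive timestep embeddings are nearly identical, the module's output changes little too.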
SpargeAttention: A training-free sparse attention that can accelerate any model inference.
Created
2025-02-25
46 commits to main branch, last one 3 days ago
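The entry above describes training-free sparse attention. One common way to get sparsity without retraining is block pruning: cheaply score key blocks against each query block with mean-pooled vectors, then evaluate exact attention only over the top-scoring blocks. The following NumPy sketch illustrates that pattern under those assumptions; it is not the repo's actual algorithm.

```python
import numpy as np

def block_sparse_attention(Q, K, V, block=16, keep=0.5):
    """Evaluate exact attention only over the key blocks whose mean-pooled
    score against each query block is highest (training-free pruning)."""
    n, d = Q.shape
    nb = n // block
    # Mean-pool queries and keys per block to estimate block importance cheaply.
    Qb = Q.reshape(nb, block, d).mean(axis=1)
    Kb = K.reshape(nb, block, d).mean(axis=1)
    block_scores = Qb @ Kb.T                 # (nb, nb) coarse block scores
    k_keep = max(1, int(keep * nb))
    out = np.zeros_like(Q)
    for i in range(nb):
        # Key blocks this query block attends to.
        sel = np.argsort(block_scores[i])[-k_keep:]
        idx = np.concatenate([np.arange(j * block, (j + 1) * block) for j in sel])
        q = Q[i * block:(i + 1) * block]
        s = q @ K[idx].T / np.sqrt(d)
        w = np.exp(s - s.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[i * block:(i + 1) * block] = w @ V[idx]
    return out

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((64, 32)).astype(np.float32) for _ in range(3))
out = block_sparse_attention(Q, K, V, block=16, keep=0.5)
```

With `keep=0.5`, each query block touches only half the keys, which is where the inference speedup comes from; no weights are modified, so any pretrained model can be wrapped this way.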
[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Created
2024-05-31
64 commits to main branch, last one 2 months ago
⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud or AI HW.
Created
2023-04-16
358 commits to main branch, last one 10 months ago
This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models"
Created
2024-06-11
8 commits to master branch, last one 9 months ago