3 results found

apache-2.0
Flux diffusion model implementation using a quantized fp8 matmul; the remaining layers use faster half-precision accumulation, which is ~2x faster on consumer devices.
Created 2024-08-05
64 commits to main branch, last one 25 days ago
[NeurIPS'23] Speculative Decoding with Big Little Decoder
Created 2023-02-10
11,217 commits to main branch, last one 9 months ago
This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models"
Created 2024-06-11
8 commits to master branch, last one 3 months ago
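The first result describes a quantized matmul: quantize both operands, multiply in the narrow format, accumulate in a wider one, and rescale the output to float. NumPy has no fp8 dtype, so the sketch below uses symmetric per-tensor int8 quantization as a stand-in for the fp8 scheme; `quantize_int8` and `quantized_matmul` are hypothetical names, not that repo's API.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization (int8 stand-in for fp8)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_matmul(a, b):
    """Matmul on quantized operands, dequantized at the output."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    # Accumulate in int32 so the narrow-format products cannot overflow,
    # then rescale back to float with the product of the two scales.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)
```

The wide accumulator is the key design point: quantization saves memory and bandwidth on the operands, while accumulating in a wider type keeps the summation accurate.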
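The BiLD entry refers to speculative decoding: a small "draft" model proposes several tokens cheaply, and the big model verifies them, keeping the prefix it agrees with. This is a minimal greedy sketch of that idea, not the paper's actual acceptance policy; `small_next` and `big_next` are hypothetical stand-ins for the two models' next-token predictors.

```python
def speculative_decode(prompt, small_next, big_next, draft_len=4, max_tokens=12):
    """Draft tokens with the small model, verify with the big model.

    Accept draft tokens while the big model agrees; on the first
    disagreement, take the big model's token and restart drafting.
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_tokens:
        # 1. The small model drafts a short continuation.
        draft = []
        for _ in range(draft_len):
            draft.append(small_next(tokens + draft))
        # 2. The big model verifies the draft position by position.
        accepted = []
        for tok in draft:
            verified = big_next(tokens + accepted)
            if verified == tok:
                accepted.append(tok)
            else:
                accepted.append(verified)  # big model overrides the draft
                break
        tokens.extend(accepted)
    return tokens[len(prompt):]
```

When the two models agree, each expensive verification pass commits several tokens at once; when they disagree, the loop still makes progress by one big-model token, so the output always matches what the big model alone would produce under greedy decoding.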