4 results found

NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
321 forks · 1.9k stars · apache-2.0 license · 35 watchers
Created 2022-09-20 · 803 commits to main branch, last one 12 hours ago
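A minimal sketch of the FP8 workflow this entry describes, using Transformer Engine's PyTorch API (te.Linear, te.fp8_autocast, and the DelayedScaling recipe are the library's documented entry points; the layer sizes and recipe settings here are illustrative, and FP8 execution requires a Hopper- or Ada-class GPU):

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Drop-in replacement for torch.nn.Linear; the matmuls run in FP8
# inside the autocast region below.
model = te.Linear(768, 768, bias=True)
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

inp = torch.randn(16, 768, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)
out.sum().backward()  # gradients flow through the FP8 GEMMs as usual
```
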
microsoft/MS-AMP
Microsoft Automatic Mixed Precision Library.
43 forks · 522 stars · mit license · 11 watchers
Created 2023-01-30 · 98 commits to main branch, last one 2 months ago
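The integration pattern this library advertises is a one-call wrap of an existing model/optimizer pair. A minimal sketch, assuming the msamp.initialize entry point and opt_level flag from the project's README; the model and training step are illustrative:

```python
import torch
import msamp

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Higher opt_levels push more state (gradients, optimizer moments)
# into low precision; "O2" is the more aggressive documented level.
model, optimizer = msamp.initialize(model, optimizer, opt_level="O2")

x = torch.randn(8, 1024, device="cuda")
loss = model(x).float().pow(2).mean()
loss.backward()
optimizer.step()
```
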
intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization. This repository has been archived.
38 forks · 348 stars · apache-2.0 license · 8 watchers
Created 2023-11-20 · 345 commits to main branch, last one 2 months ago
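Not this library's API (the repo is archived): a generic plain-PyTorch sketch of the weight-only low-bit quantization technique it implements. Weights are rounded to a small integer range with a per-output-channel scale, then dequantized at matmul time:

```python
import torch

def quantize_per_channel(w: torch.Tensor, bits: int = 4):
    qmax = 2 ** (bits - 1) - 1                    # 7 for int4
    scale = w.abs().amax(dim=1, keepdim=True) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale

def dequant_matmul(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    # Real kernels fuse dequantization into the GEMM; here it stays explicit.
    return x @ (q.float() * scale).t()

w = torch.randn(256, 256)
q, scale = quantize_per_channel(w)
x = torch.randn(1, 256)
print((dequant_matmul(x, q, scale) - x @ w.t()).abs().max())  # small error
```
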
aredden/flux-fp8-api
Flux diffusion model implementation using a quantized FP8 matmul; the remaining layers use faster half-precision accumulation, which is ~2x faster on consumer devices.
20 forks · 201 stars · apache-2.0 license · 6 watchers
Created 2024-08-05 · 64 commits to main branch, last one 25 days ago
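Not the repo's code: a minimal sketch of the FP8 quantization step the description refers to, assuming PyTorch's float8_e4m3fn dtype (available in recent releases). A real implementation fuses dequantization into the FP8 matmul kernels; here it is kept explicit:

```python
import torch

def fp8_quantize(t: torch.Tensor):
    # Per-tensor scale so values fit e4m3's representable range (max 448).
    scale = t.abs().amax().clamp(min=1e-12) / torch.finfo(torch.float8_e4m3fn).max
    return (t / scale).to(torch.float8_e4m3fn), scale

w = torch.randn(64, 64)
qw, scale = fp8_quantize(w)

x = torch.randn(4, 64, dtype=torch.half)
# Dequantize to half and matmul with half-precision accumulation.
y = x @ (qw.half() * scale.half()).t()
print((y - x @ w.t().half()).abs().max())  # quantization error stays small
```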