4 results found

Result 1
  Stats: 248 · 2.1k · apache-2.0 · 34
  SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
  Created 2020-07-21 · 3,480 commits to master branch, last one a day ago

Result 2
  row-major matmul optimization
  Created 2018-10-28 · 158 commits to master branch, last one 9 months ago

Result 3
  Stats: 33 · 303 · apache-2.0 · 8
  An innovative library for efficient LLM inference via low-bit quantization
  Created 2023-11-20 · 344 commits to main branch, last one 4 days ago

Result 4
  Stats: 16 · 105 · apache-2.0 · 9
  SOTA Weight-only Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
  Created 2024-01-04 · 208 commits to main branch, last one 2 days ago