3 results found

2.2k stars · 251 forks · apache-2.0
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Created 2020-07-21 · 3,579 commits to master branch, last one a day ago
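
This first result advertises low-bit (INT8/FP8/INT4/FP4/NF4) weight quantization across TensorFlow, PyTorch, and ONNX Runtime. As a point of reference for what "low-bit" means in practice, here is a minimal sketch of a symmetric per-channel INT4 quantize/dequantize round trip in plain PyTorch; it is illustrative only, does not use this repository's API, and every helper name in it is made up for the example.

```python
# Illustrative only: a symmetric per-channel INT4 quantize/dequantize
# round trip in plain PyTorch. These helper names are hypothetical and
# are not part of the repository listed above.
import torch

def quantize_int4(W: torch.Tensor):
    """Quantize a 2-D weight matrix to signed 4-bit integers, one scale per output channel."""
    qmax = 7                                           # signed INT4 values live in [-8, 7]
    scale = W.abs().amax(dim=1, keepdim=True) / qmax   # per-row (per-output-channel) scale
    q = torch.clamp(torch.round(W / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    """Map the integer codes back to floating point to measure the rounding error."""
    return q.to(torch.float32) * scale

W = torch.randn(4096, 4096)
q, scale = quantize_int4(W)
W_hat = dequantize(q, scale)
print("mean absolute quantization error:", (W - W_hat).abs().mean().item())
```

Per-channel (per-row) scales are the usual choice over a single per-tensor scale because one outlier channel would otherwise inflate the quantization error of every other channel.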

234 stars · 27 forks · apache-2.0
This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
Created 2024-03-06 · 255 commits to main branch, last one a day ago

222 stars · 19 forks · apache-2.0
Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
Created 2024-01-04 · 280 commits to main branch, last one a day ago
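
This last result's paper title refers to optimizing weight rounding with signed gradient descent. The toy sketch below illustrates the general idea under simplifying assumptions, not the repository's actual implementation: a learnable per-weight rounding offset is added before rounding (with a straight-through estimator so gradients can flow through the round), and the offsets are updated using only the sign of the gradient of a calibration reconstruction loss. All tensor shapes, hyperparameters, and the loss here are invented for illustration.

```python
# Hypothetical toy version of learned weight rounding with signed gradient
# descent; a sketch of the idea only, not this repository's implementation.
import torch

torch.manual_seed(0)
W = torch.randn(64, 64)                        # weight matrix to quantize
X = torch.randn(128, 64)                       # calibration activations
qmax = 7                                       # signed INT4 range is [-8, 7]
scale = W.abs().amax(dim=1, keepdim=True) / qmax

def fake_quant(W, V):
    # Straight-through estimator: round() has zero gradient everywhere,
    # so the backward pass treats the rounding as the identity.
    q = W / scale + V
    q = q + (torch.round(q) - q).detach()
    q = torch.clamp(q, -qmax - 1, qmax)
    return q * scale

V = torch.zeros_like(W, requires_grad=True)    # learnable per-weight rounding offsets
ref = X @ W.T                                  # full-precision layer output on calibration data
for step in range(200):
    loss = torch.mean((X @ fake_quant(W, V).T - ref) ** 2)
    loss.backward()
    with torch.no_grad():
        V -= 1e-3 * V.grad.sign()              # signed gradient descent: step by the sign only
        V.clamp_(-0.5, 0.5)                    # stay within half a quantization step
        V.grad.zero_()
```

In this sketch, clamping the offsets to ±0.5 means each weight can only switch between rounding down and rounding up, so what is learned is effectively a per-weight rounding decision.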