2 results found

248 · 2.1k · apache-2.0 · 34
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Created 2020-07-21 · 3,480 commits to master branch, last one a day ago
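The description above names low-bit weight quantization (e.g. INT8). As a point of reference, here is a minimal illustrative sketch of symmetric per-tensor INT8 quantization in PyTorch; the function and tensor names are hypothetical and this is not the listed repository's API, just the underlying idea.

```python
import torch

def quantize_int8_symmetric(weights: torch.Tensor):
    """Illustrative symmetric per-tensor INT8 quantization (not the repo's API)."""
    # The scale maps the largest absolute weight onto the INT8 range [-127, 127].
    scale = weights.abs().max() / 127.0
    q = torch.clamp(torch.round(weights / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from the INT8 values and the scale."""
    return q.float() * scale

# Example: quantize a random weight matrix and check the reconstruction error.
w = torch.randn(256, 256)
q, scale = quantize_int8_symmetric(w)
w_hat = dequantize(q, scale)
print("max abs error:", (w - w_hat).abs().max().item())
```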
9 · 116 · apache-2.0 · 8
This is the official PyTorch implementation of "LLM-QBench: A Benchmark Towards the Best Practice for Post-training Quantization of Large Language Models", and also an efficient LLM compression tool w...
Created 2024-03-06 · 62 commits to main branch, last one 4 days ago
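The second result concerns post-training quantization, where activation ranges are measured on a small calibration set before the model is quantized. The sketch below illustrates that calibration step with a hypothetical helper on a toy model; it is a conceptual example under assumed names, not the listed tool's interface.

```python
import torch
import torch.nn as nn

def calibrate_activation_scale(model: nn.Module, layer: nn.Module, calib_batches) -> float:
    """Run calibration data through the model, record the activation range seen
    at one layer, and derive a symmetric INT8 scale from it (illustrative only)."""
    max_abs = 0.0

    def hook(_module, _inputs, output):
        nonlocal max_abs
        max_abs = max(max_abs, output.detach().abs().max().item())

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        for batch in calib_batches:
            model(batch)
    handle.remove()
    return max_abs / 127.0  # scale for symmetric INT8

# Example with a toy model and random calibration data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
calib = [torch.randn(8, 16) for _ in range(10)]
scale = calibrate_activation_scale(model, model[0], calib)
print("per-tensor activation scale for the first layer:", scale)
```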