2 results found

intel/neural-compressor (apache-2.0, 2.3k stars)
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Created 2020-07-21
3,699 commits to master branch, last one 2 days ago
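
For orientation, a minimal post-training INT8 quantization sketch against this library, assuming the 2.x PostTrainingQuantConfig / quantization.fit interface; the toy model, calibration data, and output path below are placeholders, not part of the repository.

    # Hedged sketch: static post-training INT8 quantization with neural-compressor (assumed 2.x API).
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from neural_compressor import PostTrainingQuantConfig, quantization

    # Toy FP32 model and a small random calibration set, stand-ins for a real model and dataset.
    fp32_model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
    calib_loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.zeros(64, dtype=torch.long)), batch_size=8)

    # Calibrate activation ranges on the loader, then convert weights/activations to INT8.
    conf = PostTrainingQuantConfig(approach="static")
    q_model = quantization.fit(model=fp32_model, conf=conf, calib_dataloader=calib_loader)
    q_model.save("./int8_model")  # hypothetical output directory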
ModelTC/llmc (apache-2.0, 365 stars)
[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
Created 2024-03-06
421 commits to main branch, last one 2 days ago
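
As a generic illustration of the weight-only low-bit quantization such a toolkit benchmarks (a plain PyTorch sketch, not llmc's own code or configuration format): a group-wise symmetric INT4 round-trip and its reconstruction error.

    # Hedged sketch: group-wise symmetric INT4 quantize/dequantize of a weight matrix.
    import torch

    def quantize_dequantize_int4(w: torch.Tensor, group_size: int = 64) -> torch.Tensor:
        """Round-trip each group of `group_size` weights through symmetric 4-bit codes."""
        orig_shape = w.shape
        w = w.reshape(-1, group_size)
        scale = w.abs().amax(dim=1, keepdim=True) / 7.0    # symmetric INT4 range is [-8, 7]
        q = torch.clamp(torch.round(w / scale), -8, 7)      # integer codes
        return (q * scale).reshape(orig_shape)               # dequantized approximation

    weight = torch.randn(4096, 4096)
    w_hat = quantize_dequantize_int4(weight)
    print("mean abs quantization error:", (weight - w_hat).abs().mean().item())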