148 results found
- Filter by Primary Language:
- Python (104)
- Jupyter Notebook (16)
- C++ (6)
- C (4)
- Go (1)
- HTML (1)
- JavaScript (1)
- Cuda (1)
- Kotlin (1)
- C# (1)
- Ruby (1)
- Rust (1)
- Tcl (1)
- VHDL (1)
- +
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Created
2023-05-28
2,377 commits to main branch, last one a day ago
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Created
2023-03-15
556 commits to main branch, last one 6 months ago
Faster Whisper transcription with CTranslate2
Created
2023-02-11
234 commits to master branch, last one 23 hours ago
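The entry above describes CTranslate2-based Whisper transcription. A minimal usage sketch, assuming the faster-whisper Python package and a placeholder audio file:

```python
from faster_whisper import WhisperModel

# "small" is a placeholder model size; compute_type="int8" requests CTranslate2's
# 8-bit quantized inference on CPU.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)  # audio.mp3 is a placeholder
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```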
[🔥 updating ...] AI-powered automated quantitative trading bot (fully local deployment). AI-powered Quantitative Investment Research Platform. 📃 online docs: https://ufund-me.github.io/Qbot ✨ 📰 qbot-mini: https://github.com/Charmve/iQuant
Created
2022-11-23
143 commits to main branch, last one 11 days ago
Lossy PNG compressor — pngquant command based on libimagequant library
Created
2009-09-17
1,206 commits to main branch, last one 4 months ago
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Created
2023-04-13
761 commits to main branch, last one about a month ago
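The entry above covers GPTQ-based post-training quantization. A minimal sketch of how such a package is typically driven, assuming the auto_gptq API; the model name and calibration text are illustrative placeholders:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "facebook/opt-125m"  # placeholder: any small causal LM works for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit weights with per-group (128) quantization scales
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# GPTQ needs a small calibration set to minimize layer-wise quantization error
calibration = [tokenizer("A handful of representative sentences drive calibration.",
                         return_tensors="pt")]
model.quantize(calibration)
model.save_quantized("opt-125m-4bit-gptq")
```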
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
This repository has been archived
Created
2018-04-24
643 commits to master branch, last one about a year ago
Fast inference engine for Transformer models
Created
2019-09-23
2,186 commits to master branch, last one 2 days ago
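The entry above is a Transformer inference engine in the CTranslate2 style. A minimal sketch of batch translation with it, assuming a model directory already produced by one of the library's converters and pre-tokenized subword input (both placeholders):

```python
import ctranslate2

# "ende_ct2/" stands in for a converted English-to-German model directory
translator = ctranslate2.Translator("ende_ct2/", device="cpu", compute_type="int8")

# Input must already be tokenized into the model's subword vocabulary
tokens = ["▁Hello", "▁world", "!"]
results = translator.translate_batch([tokens])
print(results[0].hypotheses[0])  # best hypothesis as a list of target tokens
```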
Sparsity-aware deep learning inference runtime for CPUs
Created
2020-12-14
1,052 commits to main branch, last one 4 months ago
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Created
2019-12-02
162 commits to master branch, last one 10 months ago
A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
This repository has been archived
Created
2018-05-17
957 commits to master branch, last one 2 years ago
Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
Created
2017-04-28
17 commits to master branch, last one 4 years ago
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJ...
Created
2023-03-19
593 commits to main branch, last one about a month ago
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
Created
2021-07-20
1,129 commits to main branch, last one a day ago
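The entry above describes hardware-oriented acceleration of Transformers/Diffusers models. A minimal sketch of one common path, exporting a checkpoint to ONNX Runtime; the checkpoint name is illustrative and export=True assumes a recent optimum release:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch weights to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Accelerated inference with an exported ONNX model."))
```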
Run Mixtral-8x7B models in Colab or consumer desktops
Created
2023-12-15
86 commits to master branch, last one 10 months ago
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Created
2020-07-21
3,619 commits to master branch, last one a day ago
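The entry above covers low-bit post-training quantization and sparsity across frameworks. A rough sketch of static INT8 post-training quantization for a PyTorch module, assuming the 2.x-style fit/PostTrainingQuantConfig API and a synthetic calibration loader (all names are illustrative):

```python
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
).eval()

# Tiny synthetic calibration loader, purely for illustration
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(128), 0) for _ in range(8)], batch_size=4
)

conf = PostTrainingQuantConfig(approach="static")  # static post-training INT8
q_model = fit(model=model, conf=conf, calib_dataloader=calib_loader)
q_model.save("./int8_model")
```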
micronet, a model compression and deployment lib. Compression: 1. quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-Ari...
Topics: bnn, twn, onnx, dorefa, pruning, pytorch, tensorrt, xnor-net, quantization, network-slimming, group-convolution, model-compression, network-in-network, tensorrt-int8-python, convolutional-networks, neuromorphic-computing, integer-arithmetic-only, batch-normalization-fuse, post-training-quantization, quantization-aware-training
Created
2019-12-04
295 commits to master branch, last one 3 years ago
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Created
2020-04-21
2,289 commits to develop branch, last one a day ago
A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research, and we are continuously improving the project. Welcome to PR the works (pape...
Created
2018-10-18
299 commits to master branch, last one 19 days ago
A Python package that extends the official PyTorch so you can easily obtain performance gains on Intel platforms
Created
2020-04-15
2,367 commits to main branch, last one 9 hours ago
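The entry above extends PyTorch for Intel hardware. A minimal sketch of the usual optimize-then-infer flow, assuming the intel_extension_for_pytorch package and a toy model:

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU()).eval()

# Applies operator fusion, memory-layout, and dtype optimizations for Intel CPUs
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(torch.randn(8, 256))
print(out.shape)
```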
PyTorch native quantization and sparsity for training and inference
Created
2023-11-03
776 commits to main branch, last one 13 hours ago
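The entry above is PyTorch-native quantization and sparsity. A minimal sketch of weight-only INT8 quantization, assuming the quantize_/int8_weight_only API of recent torchao releases and a toy model:

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
).eval()

# In-place swap of Linear weights to int8 weight-only quantized tensors
quantize_(model, int8_weight_only())

with torch.no_grad():
    out = model(torch.randn(4, 1024))
print(out.shape)
```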
PaddleSlim is an open-source library for deep model compression and architecture search.
Created
2019-12-16
1,245 commits to develop branch, last one a day ago
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
Created
2021-12-30
291 commits to master branch, last one 8 months ago
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Created
2018-10-31
833 commits to master branch, last one 17 days ago
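The entry above is a Keras/TensorFlow optimization toolkit. A minimal sketch of a quantization-aware-training setup, assuming TF2 with a Keras 2-compatible backend and a toy model:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Wraps layers with fake-quantization nodes for quantization-aware training
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
q_aware_model.summary()
```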
OpenMMLab Model Compression Toolbox and Benchmark.
Created
2021-12-22
229 commits to main branch, last one about a year ago
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Created
2023-03-30
421 commits to master branch, last one 3 months ago
Efficient computing methods developed by Huawei Noah's Ark Lab
Created
2019-09-04
157 commits to master branch, last one 15 days ago
Brevitas: neural network quantization in PyTorch
Created
2018-07-10
1,406 commits to master branch, last one about a month ago
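The entry above provides quantized drop-in layers for PyTorch. A minimal sketch of a 4-bit MLP built from Brevitas quantized modules; layer sizes and bit widths are illustrative:

```python
import torch
import torch.nn as nn
import brevitas.nn as qnn

# Quantized replacements for Linear/ReLU with 4-bit weights and activations
model = nn.Sequential(
    qnn.QuantLinear(64, 32, bias=True, weight_bit_width=4),
    qnn.QuantReLU(bit_width=4),
    qnn.QuantLinear(32, 10, bias=True, weight_bit_width=4),
)

out = model(torch.randn(1, 64))
print(out.shape)
```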
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
Created
2023-09-12
71 commits to main branch, last one 19 days ago
Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™
Topics: automl, pytorch, datumaro, openvino, quantization, deep-learning, computer-vision, machine-learning, object-detection, anomaly-detection, transfer-learning, action-recognition, image-segmentation, image-classification, incremental-learning, self-supervised-learning, semi-supervised-learning, neural-networks-compression, hyper-parameter-optimization
Created
2018-10-26
1,007 commits to develop branch, last one a day ago