105 results found
"Hung-yi Lee Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
Created
2019-07-02
582 commits to master branch, last one 9 days ago
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
This repository has been archived.
Created
2018-04-24
643 commits to master branch, last one about a year ago
Sparsity-aware deep learning inference runtime for CPUs
Created
2020-12-14
1,052 commits to main branch, last one 4 months ago
[CVPR 2023] DepGraph: Towards Any Structural Pruning
Created
2019-12-15
1,442 commits to master branch, last one a day ago
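The point of DepGraph is that physically removing channels from one layer forces matching edits in every coupled layer. As a rough illustration of that coupling (a hand-rolled sketch in plain PyTorch, not Torch-Pruning's actual API; the block, kept-channel indices, and helper names are all made up), here is a Conv-BN-Conv block slimmed along one channel set:

```python
import torch
import torch.nn as nn

# A tiny Conv-BN-Conv block: pruning conv1's output channels forces matching
# edits in bn1 and in conv2's input channels, the kind of coupling that a
# dependency graph has to resolve automatically for arbitrary models.
conv1 = nn.Conv2d(3, 8, 3, padding=1)
bn1   = nn.BatchNorm2d(8)
conv2 = nn.Conv2d(8, 16, 3, padding=1)

keep = torch.tensor([0, 2, 4, 6])  # hypothetical output channels to keep in conv1

def slim_conv_out(conv, idx):
    """Rebuild a conv with only the selected output channels."""
    new = nn.Conv2d(conv.in_channels, len(idx), conv.kernel_size,
                    padding=conv.padding, bias=conv.bias is not None)
    new.weight.data = conv.weight.data[idx].clone()
    if conv.bias is not None:
        new.bias.data = conv.bias.data[idx].clone()
    return new

def slim_conv_in(conv, idx):
    """Rebuild a conv that accepts only the selected input channels."""
    new = nn.Conv2d(len(idx), conv.out_channels, conv.kernel_size,
                    padding=conv.padding, bias=conv.bias is not None)
    new.weight.data = conv.weight.data[:, idx].clone()
    if conv.bias is not None:
        new.bias.data = conv.bias.data.clone()
    return new

def slim_bn(bn, idx):
    """Shrink a BatchNorm to the surviving channels."""
    new = nn.BatchNorm2d(len(idx))
    new.weight.data = bn.weight.data[idx].clone()
    new.bias.data = bn.bias.data[idx].clone()
    new.running_mean.data = bn.running_mean.data[idx].clone()
    new.running_var.data = bn.running_var.data[idx].clone()
    return new

conv1, bn1, conv2 = slim_conv_out(conv1, keep), slim_bn(bn1, keep), slim_conv_in(conv2, keep)
x = torch.randn(1, 3, 32, 32)
print(conv2(bn1(conv1(x))).shape)  # torch.Size([1, 16, 32, 32])
```

Torch-Pruning's contribution is discovering and applying these coupled edits automatically across arbitrary architectures.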
A curated list of neural network pruning resources.
Created
2019-05-30
61 commits to master branch, last one 7 months ago
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Created
2020-07-21
3,619 commits to master branch, last one a day ago
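For a sense of the kind of post-training quantization these toolkits automate, here is a minimal sketch using plain PyTorch dynamic INT8 quantization rather than this library's own API; the toy model and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# A float32 model; quantize_dynamic swaps the Linear layers for INT8 kernels
# that quantize activations on the fly at inference time.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same output shape, smaller and faster Linear ops
```

Static INT8/INT4 flows additionally need calibration data to pick activation scales, which is the part toolkits like the one above automate.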
micronet, a model compression and deployment lib. compression: 1. quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-Ari...
Topics: bnn, twn, onnx, dorefa, pruning, pytorch, tensorrt, xnor-net, quantization, network-slimming, group-convolution, model-compression, network-in-network, tensorrt-int8-python, convolutional-networks, neuromorphic-computing, integer-arithmetic-only, batch-normalization-fuse, post-training-quantization, quantization-aware-training
Created
2019-12-04
295 commits to master branch, last one 3 years ago
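Since the entry above lists DoReFa-style quantization-aware training, here is a minimal sketch of the DoReFa k-bit weight quantizer with a straight-through estimator, written in plain PyTorch as a generic illustration rather than this repo's implementation (the class name and the 4-bit choice are assumptions):

```python
import torch

class DoReFaWeightQuant(torch.autograd.Function):
    """k-bit DoReFa weight quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w, k):
        n = 2 ** k - 1
        # Map weights into [0, 1], round to k bits, then map back to [-1, 1].
        w01 = torch.tanh(w) / (2 * torch.tanh(w).abs().max()) + 0.5
        return 2 * torch.round(w01 * n) / n - 1

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients pass through the rounding.
        return grad_output, None

conv = torch.nn.Conv2d(3, 8, 3)
w_q = DoReFaWeightQuant.apply(conv.weight, 4)  # 4-bit weights, full-precision gradients
```

During QAT the quantized weights are used in the forward pass while the full-precision copies keep receiving gradient updates.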
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Created
2020-04-21
2,289 commits to develop branch, last one a day ago
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Created
2020-12-11
1,826 commits to main branch, last one 4 months ago
PaddleSlim is an open-source library for deep model compression and architecture search.
Created
2019-12-16
1,245 commits to develop branch, last one a day ago
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Created
2018-10-31
833 commits to master branch, last one 17 days ago
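A minimal usage sketch for the toolkit above, assuming the standard tfmot Keras pruning entry points (prune_low_magnitude, PolynomialDecay, UpdatePruningStep, strip_pruning); the toy model and schedule values are illustrative:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Wrap an existing Keras model for magnitude pruning; the schedule ramps
# sparsity from 0% to 50% over the first 1,000 training steps.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# UpdatePruningStep must be passed to fit() so the sparsity schedule advances:
# pruned.fit(x_train, y_train, epochs=2,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# strip_pruning removes the wrappers, leaving a plain model with sparse weights.
final = tfmot.sparsity.keras.strip_pruning(pruned)
```

The quantization side follows the same wrap-then-train pattern via tfmot.quantization.keras.quantize_model.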
OpenMMLab Model Compression Toolbox and Benchmark.
Created
2021-12-22
229 commits to main branch, last one about a year ago
Config-driven, easy backup CLI for restic.
Created
2019-06-20
541 commits to master branch, last one 6 days ago
Efficient computing methods developed by Huawei Noah's Ark Lab
Created
2019-09-04
157 commits to master branch, last one 15 days ago
Neural Network Compression Framework for enhanced OpenVINO™ inference
Created
2020-05-13
2,293 commits to develop branch, last one 21 hours ago
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
Created
2023-05-17
165 commits to main branch, last one about a month ago
PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
Created
2017-06-23
6 commits to master branch, last one 5 years ago
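The repository above implements the paper's Taylor-expansion ranking of filters; as a quicker, generic taste of structured filter pruning, here is a sketch using PyTorch's built-in torch.nn.utils.prune with a plain L1-norm criterion instead (the layer shape and 50% ratio are arbitrary):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# L1-norm structured pruning of whole output filters (dim=0) of a conv layer.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
prune.ln_structured(conv, name="weight", amount=0.5, n=1, dim=0)

# Half the filters are now masked to zero; prune.remove makes the mask permanent.
zeroed = (conv.weight.detach().flatten(1).abs().sum(dim=1) == 0).sum().item()
print(f"{zeroed}/32 filters zeroed")
prune.remove(conv, "weight")
```

Note that this only zeroes weights; getting an actually smaller, faster model still requires physically removing the masked filters, which is what dedicated structural-pruning tools handle.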
mobilev2-yolov5s pruning and distillation; supports ncnn and TensorRT deployment. Ultra-light but better performance!
Created
2020-09-07
17 commits to master branch, last one 3 years ago
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
Created
2021-11-02
816 commits to main branch, last one 28 days ago
Embedded and mobile deep learning research resources
Created
2017-06-06
60 commits to master branch, last one about a year ago
A PyTorch knowledge distillation library for benchmarking and extending works in the domains of knowledge distillation, pruning, and quantization.
Created
2020-05-10
298 commits to master branch, last one about a year ago
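As background for the distillation side of the library above, here is a minimal sketch of the classic Hinton-style logit-distillation loss in plain PyTorch; the temperature and mixing weight are illustrative defaults, and none of this is the library's own API:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels plus temperature-softened KL to the teacher."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# usage sketch with random tensors standing in for model outputs
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```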
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Created
2019-03-26
19 commits to master branch, last one about a year ago
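A rough sketch of the paper's criterion, assuming the usual computable proxy: score each filter by the sum of its distances to all other filters in the layer and prune the lowest-scoring ones, since those are the filters closest to the geometric median and hence most replaceable. The layer shape and pruning ratio below are made up:

```python
import torch
import torch.nn as nn

def fpgm_prune_indices(conv: nn.Conv2d, ratio: float = 0.3):
    """Return indices of the most 'median-like' (redundant) filters."""
    w = conv.weight.detach().flatten(1)    # [out_channels, in_channels * kh * kw]
    dist = torch.cdist(w, w, p=2)          # pairwise L2 distances between filters
    score = dist.sum(dim=1)                # total distance to all other filters
    n_prune = int(ratio * w.size(0))
    return torch.argsort(score)[:n_prune]  # smallest total distance is pruned first

conv = nn.Conv2d(64, 128, 3)
print(fpgm_prune_indices(conv, ratio=0.3))  # indices of filters to drop
```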
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
Created
2023-10-16
71 commits to main branch, last one 8 months ago
Awesome machine learning model compression research papers, quantization, tools, and learning material.
Created
2018-12-06
41 commits to master branch, last one 2 months ago
YOLO model compression and multi-dataset training.
Created
2019-12-24
438 commits to master branch, last one 2 years ago
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
Created
2022-05-25
942 commits to main branch, last one 20 hours ago
Pruning and other network surgery for trained Keras models.
Created
2017-08-22
80 commits to master branch, last one 3 years ago
A PyTorch-based model pruning toolkit for pre-trained language models
Created
2021-07-20
42 commits to main branch, last one about a year ago
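Not this toolkit's API, but for a sense of what structural pruning of a pre-trained language model looks like, Hugging Face Transformers ships a built-in prune_heads method that removes attention heads in place; the model choice and head indices below are arbitrary:

```python
from transformers import BertModel

# Load a pre-trained encoder and drop selected attention heads per layer.
# Keys are layer indices, values are lists of head indices to remove.
model = BertModel.from_pretrained("bert-base-uncased")
model.prune_heads({0: [0, 1], 1: [2]})

# The attention projections are physically shrunk, not just masked, so the
# pruned model is genuinely smaller.
print(model.encoder.layer[0].attention.self.num_attention_heads)  # 10 instead of 12
```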
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Created
2020-12-11
301 commits to main branch, last one 4 months ago
A model compression and acceleration toolbox based on PyTorch.
Created
2022-07-21
134 commits to main branch, last one about a year ago