micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference), ...
bnn
twn
onnx
dorefa
pruning
pytorch
tensorrt
xnor-net
quantization
network-slimming
group-convolution
model-compression
network-in-network
tensorrt-int8-python
convolutional-networks
neuromorphic-computing
integer-arithmetic-only
batch-normalization-fuse
post-training-quantization
quantization-aware-training
Created 2019-12-04
295 commits to master branch, last one 3 years ago
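For context on the quantization-aware training (QAT) named in the description: QAT inserts "fake quantization" ops into the network during training and back-propagates through them with a straight-through estimator, so the model learns weights that survive low-precision inference. The sketch below is a minimal, generic PyTorch illustration of that idea under assumed per-tensor min/max calibration; it is not code from the micronet repository, and the class name FakeQuantize is hypothetical.

import torch
import torch.nn as nn

class FakeQuantize(nn.Module):
    # Simulated uniform quantization for QAT (illustrative sketch, not micronet's API).
    # Forward pass quantizes to `num_bits`; backward pass passes gradients through
    # unchanged via the straight-through estimator (STE).
    def __init__(self, num_bits=8):
        super().__init__()
        self.num_bits = num_bits

    def forward(self, x):
        qmin, qmax = 0, 2 ** self.num_bits - 1
        # Per-tensor scale and zero-point from the current min/max range (assumed scheme).
        x_min, x_max = x.min(), x.max()
        scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
        zero_point = qmin - torch.round(x_min / scale)
        # Quantize, clamp to the integer range, then dequantize.
        q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
        x_q = (q - zero_point) * scale
        # STE: use the quantized value in the forward pass, identity in the backward pass.
        return x + (x_q - x).detach()

if __name__ == "__main__":
    fq = FakeQuantize(num_bits=8)
    x = torch.randn(4, 3, requires_grad=True)
    fq(x).sum().backward()
    print(x.grad)  # gradient of ones: the STE lets gradients flow as identity

In a full QAT setup such modules would wrap a layer's weights and activations so the training loss reflects quantization error; post-training quantization (PTQ), also listed in the topics, instead calibrates scales on a trained model without further training.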