[11] JAIN S R, GURAL A, WU M, et al. Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks.
[12] ZHOU A J, YAO A B, GUO Y W, et al. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights.
[13] ZHANG D Q, YANG J L, YE D Q Z, et al. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. Proc of the European Conference on Computer Vision, pp. 373-390, (2018).
[14] GAO M Y, SHEN Y J, LI Q Q, et al. Residual Knowledge Distillation.
[15] NOWAK T S, CORSO J J. Deep Net Triage: Analyzing the Importance of Network Layers via Structural Compression.
[16] POLINO A, PASCANU R, ALISTARH D. Model Compression via Distillation and Quantization.
[17] WEI Y, PAN X Y, QIN H W, et al. Quantization Mimic: Towards Very Tiny CNN for Object Detection. Proc of the European Conference on Computer Vision, pp. 274-290, (2018).
[18] MISHRA A, MARR D. Apprentice: Using Knowledge Distillation Techniques to Improve Low-Precision Network Accuracy.
[19] BENGIO Y, LEONARD N, COURVILLE A. Estimating or Propagating Gradients through Stochastic Neurons for Conditional Computation.
[20] KRIZHEVSKY A. Learning Multiple Layers of Features from Tiny Images.