共 50 条
- [42] BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights SIAM JOURNAL ON IMAGING SCIENCES, 2018, 11 (04): : 2205 - 2223
- [43] FLightNNs: Lightweight Quantized Deep Neural Networks for Fast and Accurate Inference PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,
- [46] Quantized Guided Pruning for Efficient Hardware Implementations of Deep Neural Networks 2020 18TH IEEE INTERNATIONAL NEW CIRCUITS AND SYSTEMS CONFERENCE (NEWCAS'20), 2020, : 206 - 209
- [48] Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 393 - 404
- [49] OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7780 - 7788
- [50] COMPRESSING DEEP NEURAL NETWORKS USING TOEPLITZ MATRIX: ALGORITHM DESIGN AND FPGA IMPLEMENTATION 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1443 - 1447