50 records in total
- [41] Optimizing Energy Utilization of Flexible Deep Neural Network Accelerators via Cache Incorporation. 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS 2022), 2022: 681-686
- [42] Telepathic Headache: Mitigating Cache Side-Channel Attacks on Convolutional Neural Networks. Applied Cryptography and Network Security (ACNS 2021), Part I, 2021, 12726: 363-392
- [43] Dual Cache for Long Document Neural Coreference Resolution. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 15272-15285
- [44] Look-Up Table based Energy Efficient Processing in Cache Support for Neural Network Acceleration. 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2020), 2020: 88-101
- [47] A Scalable System-on-Chip Acceleration for Deep Neural Networks. IEEE Access, 2021, 9: 95412-95426
- [48] Training Acceleration for Deep Neural Networks: A Hybrid Parallelization Strategy. 2021 58th ACM/IEEE Design Automation Conference (DAC), 2021: 1165-1170
- [49] Acceleration Strategies for Speech Recognition based on Deep Neural Networks. Mechatronics Engineering, Computing and Information Technology, 2014, 556-562: 5181-5185
- [50] Fully Learnable Group Convolution for Acceleration of Deep Neural Networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 9041-9050