共 50 条
- [41] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2018, : 1163 - 1166
- [43] A compression-based memory-efficient optimization for out-of-core GPU stencil computation The Journal of Supercomputing, 2023, 79 : 11055 - 11077
- [44] GMLake: Efficient and Transparent GPU Memory Defragmentation for Large-scale DNN Training with Virtual Memory Stitching PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS, ASPLOS 2024, VOL 2, 2024, : 450 - 466
- [45] From GPU to FPGA: A Pipelined Hierarchical Approach to Fast and Memory-efficient NDN Name Lookup 2014 IEEE 22ND ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES (FCCM 2014), 2014, : 106 - 106
- [47] Memory-Efficient Adaptive Optimization ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
- [48] A compression-based memory-efficient optimization for out-of-core GPU stencil computation JOURNAL OF SUPERCOMPUTING, 2023, 79 (10): : 11055 - 11077
- [49] Sparse and Robust RRAM-based Efficient In-memory Computing for DNN Inference 2022 IEEE INTERNATIONAL RELIABILITY PHYSICS SYMPOSIUM (IRPS), 2022,