Total: 50 entries
- [1] Memory-Efficient Dataflow Inference for Deep CNNs on FPGA. 2020 International Conference on Field-Programmable Technology (ICFPT 2020), 2020, pp. 48-55.
- [2] Occamy: Memory-efficient GPU Compiler for DNN Inference. 2023 60th ACM/IEEE Design Automation Conference (DAC), 2023.
- [3] TETRIS: Memory-efficient Serverless Inference through Tensor Sharing. Proceedings of the 2022 USENIX Annual Technical Conference, 2022, pp. 473-488.
- [4] Memory-Efficient Deep Learning Inference in Trusted Execution Environments. 2021 IEEE International Conference on Cloud Engineering (IC2E 2021), 2021, pp. 161-167.
- [5] StreamNet: Memory-Efficient Streaming Tiny Deep Learning Inference on the Microcontroller. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [6] Performance Trade-offs in Weight Quantization for Memory-Efficient Inference. 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2019), 2019, pp. 246-250.
- [7] Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA. GECCO '20: Proceedings of the 2020 Genetic and Evolutionary Computation Conference, 2020, pp. 1125-1133.
- [8] A Memory-Efficient Edge Inference Accelerator with XOR-based Model Compression. 2023 60th ACM/IEEE Design Automation Conference (DAC), 2023.
- [9] PENETRALIUM: Privacy-preserving and memory-efficient neural network inference at the edge. Future Generation Computer Systems: The International Journal of eScience, 2024, 156: 30-41.
- [10] Buffer Sizes Reduction for Memory-efficient CNN Inference on Mobile and Embedded Devices. 2020 23rd Euromicro Conference on Digital System Design (DSD 2020), 2020, pp. 133-140.