50 entries in total
- [21] EDDIS: Accelerating Distributed Data-Parallel DNN Training for Heterogeneous GPU Cluster. 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2024), 2024, pp. 1167-1168
- [22] SAVE: Sparsity-Aware Vector Engine for Accelerating DNN Training and Inference on CPUs. 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2020), 2020, pp. 796-810
- [24] A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI '20), 2020, pp. 463-479
- [25] Artifact: MASA: Responsive Multi-DNN Inference on the Edge. 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and Other Affiliated Events (PerCom Workshops), 2021, pp. 446-447
- [26] Heterogeneous Dataflow Accelerators for Multi-DNN Workloads. 2021 27th IEEE International Symposium on High-Performance Computer Architecture (HPCA 2021), 2021, pp. 71-83
- [27] Input Feature Pruning for Accelerating GNN Inference on Heterogeneous Platforms. 2022 IEEE 29th International Conference on High Performance Computing, Data, and Analytics (HiPC), 2022, pp. 282-291
- [29] Aries: A DNN Inference Scheduling Framework for Multi-core Accelerators. 2024 5th International Conference on Computing, Networks and Internet of Things (CNIOT 2024), 2024, pp. 186-191
- [30] Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services (MobiSys 2024), 2024, pp. 465-478