Parallel Approaches to Accelerate Deep Learning Processes Using Heterogeneous Computing

Cited by: 0
Authors
Nasimov, Rashid [1 ]
Rakhimov, Mekhriddin [2 ]
Javliev, Shakhzod [2 ]
Abdullaeva, Malika [2 ]
Affiliations
[1] Tashkent State Univ Econ, Tashkent, Uzbekistan
[2] Tashkent Univ Informat Technol, Tashkent, Uzbekistan
Keywords
artificial intelligence; deep learning; heterogeneous computing systems; OpenCL; CUDA technology; parallel processing;
DOI
10.1007/978-3-031-60997-8_4
CLC Classification Number
TP3 [Computing technology, computer technology];
Discipline Classification Code
0812;
Abstract
The rise of artificial intelligence (AI) underscores the need to expedite training procedures, especially in deep learning, where extensive data must be processed. This research aims to significantly improve the time efficiency of deep learning processes. While graphics processing units (GPUs) are widely recognized as offering notably faster performance than a computer's central processing unit (CPU) for certain data tasks, this study explores heterogeneous computing systems for situations where GPUs are unavailable. We investigate strategies for achieving enhanced processing speed using advanced technologies such as OpenCL and CUDA. The study concludes by presenting comparative results from the various approaches and offering recommendations for future work.
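To make the device-fallback idea from the abstract concrete, the following is a minimal, hypothetical sketch (not code from the paper): it uses OpenCL through the pyopencl package to run a simple element-wise kernel on a GPU when one is present and on the CPU (or another OpenCL device) otherwise. The pick_device helper, the relu kernel, and all sizes are illustrative assumptions rather than the authors' implementation.

import numpy as np
import pyopencl as cl

def pick_device():
    # Prefer a GPU if one is present; otherwise fall back to any other
    # OpenCL device, e.g. a multi-core CPU (hypothetical selection policy).
    gpus, others = [], []
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            (gpus if dev.type & cl.device_type.GPU else others).append(dev)
    if gpus:
        return gpus[0]
    if others:
        return others[0]
    raise RuntimeError("No OpenCL device available")

device = pick_device()
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

# A toy kernel standing in for a single layer-level operation (ReLU activation).
KERNEL_SRC = """
__kernel void relu(__global const float *x, __global float *y) {
    int i = get_global_id(0);
    y[i] = x[i] > 0.0f ? x[i] : 0.0f;
}
"""
program = cl.Program(ctx, KERNEL_SRC).build()

x = np.random.randn(1 << 20).astype(np.float32)
y = np.empty_like(x)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, y.nbytes)

# Launch one work-item per element and copy the result back to the host.
program.relu(queue, x.shape, None, x_buf, y_buf)
cl.enqueue_copy(queue, y, y_buf)
queue.finish()

print("Executed on:", device.name)

Because OpenCL exposes CPUs, GPUs, and other accelerators through the same API, the same kernel source can be reused unchanged across devices, which is the essence of the heterogeneous approach the paper investigates.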
Pages: 32 - 41
Number of pages: 10