Triangle Counting Accelerations: From Algorithm to In-Memory Computing Architecture

Cited by: 13
Authors
Wang, Xueyan [1 ]
Yang, Jianlei [2 ]
Zhao, Yinglin [1 ]
Jia, Xiaotao [1 ]
Yin, Rong [3 ]
Chen, Xuhang [1 ]
Qu, Gang [4 ,5 ]
Zhao, Weisheng [1 ]
Affiliations
[1] Beihang Univ, Sch Integrated Circuit Sci & Engn, MIIT Key Lab Spintron, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm NLSDE, BDBC, Beijing 100191, Peoples R China
[3] Chinese Acad Sci, Inst Informat Engn, Beijing 100049, Peoples R China
[4] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
[5] Univ Maryland, Inst Syst Res, College Pk, MD 20742 USA
Funding
National Natural Science Foundation of China;
Keywords
Triangle counting acceleration; processing-in-memory; algorithm-architecture co-design; graph computing; NONVOLATILE MEMORY; ENERGY;
DOI
10.1109/TC.2021.3131049
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Triangles are the basic substructure of networks, and triangle counting (TC) is a fundamental graph computing problem in numerous fields such as social network analysis. Like other graph computing problems, TC has a high memory-to-computation ratio and a random memory access pattern, so it involves a large amount of data transfer and suffers from the bandwidth bottleneck of the traditional von Neumann architecture. To overcome this challenge, in this paper we propose to accelerate TC with the emerging processing-in-memory (PIM) architecture through algorithm-architecture co-optimization. To enable efficient in-memory implementation, we reformulate TC with bitwise logic operations (such as AND) and develop customized graph compression and mapping techniques for efficient data flow management. With an emerging computational Spin-Transfer Torque Magnetic RAM (STT-MRAM) array, one of the most promising PIM enabling technologies, device-to-architecture co-simulation results demonstrate that the proposed TC in-memory accelerator outperforms state-of-the-art GPU and FPGA accelerators by 12.2x and 31.8x, respectively, and achieves a 34x energy efficiency improvement over the FPGA accelerator.
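The bitwise reformulation described in the abstract can be illustrated with a minimal sketch (an illustration only, not the authors' implementation): each vertex's neighbor set is stored as a bit vector, and for every edge (u, v) the triangles closed by that edge are found by a bitwise AND of the two rows followed by a population count, which is the kind of operation the paper maps onto a computational STT-MRAM array.

    def count_triangles(num_vertices, edges):
        # Build adjacency bit vectors; Python ints serve as arbitrary-width words.
        adj = [0] * num_vertices
        for u, v in edges:
            adj[u] |= 1 << v
            adj[v] |= 1 << u

        total = 0
        for u, v in edges:
            # Bitwise AND selects common neighbors; popcount counts them.
            total += bin(adj[u] & adj[v]).count("1")

        # Each triangle is counted once per edge, i.e., three times in total.
        return total // 3

    # Small usage example: a 4-clique contains 4 triangles.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(count_triangles(4, edges))  # -> 4

In a PIM setting, the AND and popcount steps would be performed inside the memory array rather than on a CPU, which is what removes the data-transfer bottleneck the abstract refers to.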
Pages: 2462-2472
Number of pages: 11