A Reconfigurable Pipelined Architecture for Convolutional Neural Network Acceleration

Cited by: 2
Authors
Xue, Chengbo [1 ]
Cao, Shan [2 ]
Jiang, Rongkun [1 ]
Yang, Hao [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Informat & Elect, Beijing 100081, Peoples R China
[2] Shanghai Univ, Shanghai Inst Adv Commun & Data Sci, Joint Int Res Lab Specialty Fiber Opt & Adv Commu, Key Lab Specialty Fiber Opt & Opt Access Networks, Shanghai, Peoples R China
Keywords
Convolutional neural network; inter-layer pipeline; hardware accelerator; machine learning
DOI
10.1109/ISCAS.2018.8351425
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communications Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
The convolutional neural network (CNN) is now widely used in a variety of visual recognition applications, and hardware acceleration of CNNs is urgently needed as state-of-the-art CNN models demand increasingly more computation. In this paper, we propose a pipelined architecture for CNN acceleration. The possibility of both inner-layer and inter-layer pipelining for typical CNN networks is analyzed, and two data re-ordering methods, the filter-first (FF) flow and the image-first (IF) flow, are proposed for different kinds of layers. A pipelined CNN accelerator for AlexNet is then implemented, whose dataflow can be reconfigurably selected for each layer. Simulation results show that the proposed pipelined architecture achieves a 43% performance improvement over a non-pipelined design. The AlexNet accelerator is implemented in 65 nm CMOS technology running at 200 MHz, with 350 mW power consumption and 24 GFLOPS peak performance.
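The abstract gives no pseudocode for the FF and IF flows, so the sketch below is only one plausible interpretation: the two flows are modeled as two loop orderings of the same convolution, with "filter-first" iterating over all filters for a fixed input window and "image-first" streaming the whole image once per filter. The function names and loop structure are assumptions for illustration, not the paper's actual dataflow definitions.

```python
# Hypothetical sketch of the FF/IF data re-ordering idea (an assumption,
# not the paper's exact dataflow). Both orderings compute the identical
# convolution result; they differ only in which operands are reused in
# the inner loop, and hence in what must be buffered on-chip.
import numpy as np

def conv_filter_first(img, filters):
    """FF flow: for each output position, apply every filter before advancing."""
    C, H, W = img.shape
    M, _, K, _ = filters.shape
    out = np.zeros((M, H - K + 1, W - K + 1))
    for y in range(H - K + 1):
        for x in range(W - K + 1):
            window = img[:, y:y+K, x:x+K]   # one input window stays resident
            for m in range(M):              # filters iterate innermost
                out[m, y, x] = np.sum(window * filters[m])
    return out

def conv_image_first(img, filters):
    """IF flow: fix one filter and stream the full image, then move on."""
    C, H, W = img.shape
    M, _, K, _ = filters.shape
    out = np.zeros((M, H - K + 1, W - K + 1))
    for m in range(M):                      # one filter stays resident
        for y in range(H - K + 1):
            for x in range(W - K + 1):
                out[m, y, x] = np.sum(img[:, y:y+K, x:x+K] * filters[m])
    return out
```

Because the two orderings are numerically equivalent but stress the on-chip buffers differently, an accelerator that can select between them per layer (as the reconfigurable design described above does) can match each layer's filter-count/feature-map-size trade-off.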
Pages: 5
Related Papers
50 records
  • [1] Fully Pipelined FPGA Acceleration of Binary Convolutional Neural Networks with Neural Architecture Search
    Ji, Mengfei
    Al-Ars, Zaid
    Chang, Yuchun
    Zhang, Baolin
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2024, 33 (10)
  • [2] A Reconfigurable Process Engine for Flexible Convolutional Neural Network Acceleration
    Chen, Xiaobai
    Xiao, Shanlin
    Yu, Zhiyi
    2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2018, : 1402 - 1405
  • [3] Deep Convolutional Neural Network Architecture With Reconfigurable Computation Patterns
    Tu, Fengbin
    Yin, Shouyi
    Ouyang, Peng
    Tang, Shibin
    Liu, Leibo
    Wei, Shaojun
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2017, 25 (08) : 2220 - 2233
  • [4] High Performance Kernel Architecture for Convolutional Neural Network Acceleration
    Hazarika, Anakhi
    Poddar, Soumyajit
    Rahaman, Hafizur
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2021, 30 (15)
  • [5] A High Performance Reconfigurable Hardware Architecture for Lightweight Convolutional Neural Network
    An, Fubang
    Wang, Lingli
    Zhou, Xuegong
    ELECTRONICS, 2023, 12 (13)
  • [6] Highly pipelined Accelerator for Convolutional Neural Network
    Kim, Junkyung
    Bae, HwangSik
    Min, Kyeong Yuk
    Chong, Jongwha
    2019 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2019,
  • [7] RNA: A Reconfigurable Architecture for Hardware Neural Acceleration
    Tu, Fengbin
    Yin, Shouyi
    Ouyang, Peng
    Liu, Leibo
    Wei, Shaojun
    2015 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2015, : 695 - 700
  • [8] A Fully Pipelined Hardware Architecture for Convolutional Neural Network with Low Memory Usage and DRAM Bandwidth
    Li, Zhiwei
    Li, Yan
    Chen, Song
    Wu, Feng
    2017 IEEE 12TH INTERNATIONAL CONFERENCE ON ASIC (ASICON), 2017, : 237 - 240
  • [9] Reconfigurable Neuromorphic Neural Network Architecture
    Sharma, Kapil
    Sarangi, Pradeepta Kumar
    Sharma, Parth
    Nayak, Soumya Ranjan
    Aluvala, Srinivas
    Swain, Santosh Kumar
    APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING, 2024, 2024
  • [10] Acceleration of Sparse Convolutional Neural Network Based on Coarse-Grained Dataflow Architecture
    Wu X.
    Ou Y.
    Li W.
    Wang D.
    Zhang H.
    Fan D.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (07): : 1504 - 1517