The Nebula Benchmark Suite: Implications of Lightweight Neural Networks

Cited by: 4
Authors
Kim, Bogil [1 ]
Lee, Sungjae [1 ]
Park, Chanho [1 ]
Kim, Hyeonjin [1 ]
Song, William J. [1 ]
Affiliations
[1] Yonsei Univ, Sch Elect & Elect Engn, Seoul 120749, South Korea
Keywords
Neural networks; Benchmark testing; Training; Libraries; Microarchitecture; Computational modeling; Acceleration; benchmarks; characterization; hardware measurement;
DOI
10.1109/TC.2020.3029327
Chinese Library Classification (CLC)
TP3 (computing technology, computer technology)
Discipline code
0812
Abstract
This article presents a benchmark suite named Nebula that implements lightweight neural network benchmarks. Recent neural networks tend to form deeper and larger networks to enhance accuracy and applicability. However, the sheer size of these heavy networks makes them highly challenging to use in conventional research environments such as microarchitecture simulators. We observe that neural network computations consist mainly of matrix and vector calculations that repeat over multi-dimensional data encompassing batches, channels, layers, etc. This observation motivates us to develop a variable-sized neural network benchmark suite that lets users select an appropriate benchmark size for different research purposes or experimental conditions. Inspired by the implementations of well-known benchmarks such as the PARSEC and SPLASH suites, Nebula offers various size options, from large to small datasets, for diverse types of neural networks. The Nebula benchmark suite comprises seven representative neural networks built on a C++ framework. The variable-sized benchmarks can be executed i) with acceleration libraries (e.g., BLAS, cuDNN) for faster and more realistic application runs, or ii) without the external libraries when execution environments do not support them, e.g., microarchitecture simulators. This article presents a methodology for developing the variable-sized neural network benchmarks, and their performance and characteristics are evaluated through hardware measurements. The results demonstrate that the Nebula benchmarks reduce execution time by as much as 25x while preserving architectural behaviors similar to those of the full-fledged neural networks.
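The abstract's core idea — shrinking a network's dimensions (batches, channels, layers) while keeping its layer structure intact, so that a smaller benchmark retains similar architectural behavior — can be sketched as follows. This is a minimal Python illustration under assumed names, not Nebula's actual C++ framework or API; `ConvLayer`, `resize`, and the layer dimensions are all hypothetical.

```python
# Sketch: a variable-sized benchmark scales a network's channel counts
# by a size option while preserving depth and kernel shapes, shrinking
# the dominant matrix/vector compute without changing the structure.
from dataclasses import dataclass


@dataclass
class ConvLayer:
    in_channels: int
    out_channels: int
    kernel: int    # square kernel width
    spatial: int   # output feature-map width/height

    def macs(self, batch: int) -> int:
        # Multiply-accumulate count for one forward pass of this layer.
        return (batch * self.out_channels * self.spatial ** 2
                * self.in_channels * self.kernel ** 2)


# Toy stand-in for a full-sized network (dimensions are illustrative).
FULL = [ConvLayer(64, 128, 3, 56), ConvLayer(128, 256, 3, 28)]


def resize(layers, channel_scale):
    """Shrink channel counts uniformly; depth and kernels are kept."""
    return [ConvLayer(max(1, int(l.in_channels * channel_scale)),
                      max(1, int(l.out_channels * channel_scale)),
                      l.kernel, l.spatial)
            for l in layers]


full_macs = sum(l.macs(batch=32) for l in FULL)
small_macs = sum(l.macs(batch=32) for l in resize(FULL, 0.2))
print(f"compute reduction: {full_macs / small_macs:.1f}x")
```

Because the per-layer MAC count is quadratic in the channel scale, even modest scaling yields large compute reductions, which is consistent with the order-of-magnitude speedups the article reports for downsized benchmarks.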
Pages: 1887-1900
Number of pages: 14