The Nebula Benchmark Suite: Implications of Lightweight Neural Networks

Cited by: 4
Authors
Kim, Bogil [1 ]
Lee, Sungjae [1 ]
Park, Chanho [1 ]
Kim, Hyeonjin [1 ]
Song, William J. [1 ]
Institutions
[1] Yonsei Univ, Sch Elect & Elect Engn, Seoul 120749, South Korea
Keywords
Neural networks; Benchmark testing; Training; Libraries; Microarchitecture; Computational modeling; Acceleration; benchmarks; characterization; hardware measurement;
DOI
10.1109/TC.2020.3029327
Chinese Library Classification
TP3 [Computing technology, computer technology];
Subject Classification Code
0812
Abstract
This article presents Nebula, a benchmark suite of lightweight neural network benchmarks. Recent neural networks tend to grow deeper and larger to improve accuracy and applicability. However, the massive volume of these heavy networks makes them highly challenging to use in conventional research environments such as microarchitecture simulators. We observe that neural network computations consist mainly of matrix and vector calculations that repeat over multi-dimensional data encompassing batches, channels, layers, etc. This observation motivates us to develop a variable-sized neural network benchmark suite that lets users select an appropriate benchmark size for different research purposes or experimental conditions. Inspired by the implementations of well-known benchmark suites such as PARSEC and SPLASH, Nebula offers size options ranging from large to small datasets for diverse types of neural networks. The suite comprises seven representative neural networks built on a C++ framework. The variable-sized benchmarks can be executed i) with acceleration libraries (e.g., BLAS, cuDNN) for faster and more realistic application runs, or ii) without these external libraries when execution environments do not support them, e.g., microarchitecture simulators. This article presents a methodology for developing the variable-sized neural network benchmarks and evaluates their performance and characteristics based on hardware measurements. The results demonstrate that the Nebula benchmarks reduce execution time by as much as 25x while preserving architectural behaviors similar to those of the full-fledged neural networks.
Pages: 1887 - 1900
Page count: 14
Related Papers
50 items total
  • [41] OPENEARTHMAP BENCHMARK SUITE AND ITS APPLICATIONS
    Yokoyama, Naoto
    Xia, Junshi
    Broni-Bediako, Clifford
    Song, Jian
    Chen, Hongruixuan
    IGARSS 2024-2024 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, IGARSS 2024, 2024, : 959 - 962
  • [42] ParVec: vectorizing the PARSEC benchmark suite
    Juan M. Cebrian
    Magnus Jahre
    Lasse Natvig
    Computing, 2015, 97 : 1077 - 1100
  • [43] Austin RCS Benchmark Suite Developments
    Kelley, Jon T.
    Courtney, Clifton
    Chamulak, David A.
    Yilmaz, Ali E.
    2019 USNC-URSI RADIO SCIENCE MEETING (JOINT WITH AP-S SYMPOSIUM), 2019, : 19 - 20
  • [44] SupermarQ: A Scalable Quantum Benchmark Suite
    Tomesh, Teague
    Gokhale, Pranav
    Omole, Victory
    Ravi, Gokul Subramanian
    Smith, Kaitlin N.
    Viszlai, Joshua
    Wu, Xin-Chuan
    Hardavellas, Nikos
    Martonosi, Margaret R.
    Chong, Frederic T.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2022), 2022, : 587 - 603
  • [45] DIBS: A Data Integration Benchmark Suite
    Cabrera, Anthony M.
    Faber, Clayton J.
    Cepeda, Kyle
    Derber, Robert
    Epstein, Cooper
    Zheng, Jason
    Cytron, Ron K.
    Chamberlain, Roger D.
    COMPANION OF THE 2018 ACM/SPEC INTERNATIONAL CONFERENCE ON PERFORMANCE ENGINEERING (ICPE '18), 2018, : 25 - 28
  • [46] Rodinia: A Benchmark Suite for Heterogeneous Computing
    Che, Shuai
    Boyer, Michael
    Meng, Jiayuan
    Tarjan, David
    Sheaffer, Jeremy W.
    Lee, Sang-Ha
    Skadron, Kevin
    PROCEEDINGS OF THE 2009 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION, 2009, : 44 - 54
  • [47] Perceptual hashing algorithms benchmark suite
    Schmucker Martin
Chinese Journal of Scientific Instrument, 2007, (04) : 603 - 608
  • [48] Treelogy: A Benchmark Suite for Tree Traversals
    Hegde, Nikhil
    Liu, Jianqiao
    Sundararajah, Kirshanthan
    Kulkarni, Milind
    2017 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS), 2017, : 227 - 237
  • [49] Spector: An OpenCL FPGA Benchmark Suite
    Gautier, Quentin
    Althoff, Alric
    Meng, Pingfan
    Kastner, Ryan
    2016 INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE TECHNOLOGY (FPT), 2016, : 141 - 148
  • [50] HyperBench: A Benchmark Suite for Virtualization Capabilities
    Wei S.
    Zhang K.
    Tu B.
    Performance Evaluation Review, 2019, 47 (01): : 73 - 74