The Nebula Benchmark Suite: Implications of Lightweight Neural Networks

Cited by: 4
Authors
Kim, Bogil [1]
Lee, Sungjae [1]
Park, Chanho [1]
Kim, Hyeonjin [1]
Song, William J. [1]
Affiliations
[1] Yonsei Univ, Sch Elect & Elect Engn, Seoul 120749, South Korea
Keywords
Neural networks; Benchmark testing; Training; Libraries; Microarchitecture; Computational modeling; Acceleration; benchmarks; characterization; hardware measurement
DOI
10.1109/TC.2020.3029327
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
This article presents Nebula, a suite of lightweight neural network benchmarks. Recent neural networks tend to grow deeper and larger to improve accuracy and applicability, but the sheer size of these heavy networks makes them difficult to use in conventional research environments such as microarchitecture simulators. We observe that neural network computations consist mainly of matrix and vector calculations repeated over multi-dimensional data (e.g., batches, channels, and layers). This observation motivates a variable-sized neural network benchmark suite that lets users select an appropriate benchmark size for different research purposes or experimental conditions. Inspired by the implementations of well-known benchmark suites such as PARSEC and SPLASH, Nebula offers size options ranging from large to small datasets for diverse types of neural networks. The suite comprises seven representative neural networks built on a C++ framework. The variable-sized benchmarks can be executed i) with acceleration libraries (e.g., BLAS, cuDNN) for faster and more realistic application runs, or ii) without the external libraries when execution environments do not support them, e.g., microarchitecture simulators. This article presents a methodology for developing the variable-sized neural network benchmarks and evaluates their performance and characteristics based on hardware measurements. The results demonstrate that the Nebula benchmarks reduce execution time by as much as 25x while preserving architectural behaviors similar to those of the full-fledged neural networks.
Pages: 1887-1900 (14 pages)
Related Papers (50 total)
  • [1] A Suite of IEEE 1687 Benchmark Networks
    Tsertov, Anton
    Jutman, Artur
    Devadze, Sergei
    Reorda, Matteo Sonza
    Larsson, Erik
    Zadegan, Farrokh Ghani
    Cantoro, Riccardo
    Montazeri, Mehrdad
    Krenz-Baath, Rene
    PROCEEDINGS 2016 IEEE INTERNATIONAL TEST CONFERENCE (ITC), 2016,
  • [2] The PARSEC Benchmark Suite: Characterization and Architectural Implications
    Bienia, Christian
    Kumar, Sanjeev
    Singh, Jaswinder Pal
    Li, Kai
    PACT'08: PROCEEDINGS OF THE SEVENTEENTH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES, 2008, : 72 - 81
  • [3] DNNMark: A Deep Neural Network Benchmark Suite for GPUs
    Dong, Shi
    Kaeli, David
    PROCEEDINGS OF THE GENERAL PURPOSE GPUS (GPGPU-10), 2017, : 63 - 72
  • [4] A Benchmark Suite of Hardware Trojans for On-Chip Networks
    Wang, Jian
    Guo, Shize
    Chen, Zhe
    Zhang, Tao
    IEEE ACCESS, 2019, 7 : 102002 - 102009
  • [5] μSuite: A Benchmark Suite for Microservices
    Sriraman, Akshitha
    Wenisch, Thomas F.
    2018 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION (IISWC), 2018, : 1 - 12
  • [6] Architectural Implications in Graph Processing of Accelerator with Gardenia Benchmark Suite
    Zhang, Yang
    Shen, Jie
    Xu, Zhen
    Qiu, Shikai
    Chen, Xuhao
    2019 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2019), 2019, : 1329 - 1339
  • [7] A Continuous Optimisation Benchmark Suite from Neural Network Regression
    Malan, Katherine M.
    Cleghorn, Christopher W.
    PARALLEL PROBLEM SOLVING FROM NATURE - PPSN XVII, PPSN 2022, PT I, 2022, 13398 : 177 - 191
  • [8] The OARF Benchmark Suite: Characterization and Implications for Federated Learning Systems
    Hu, Sixu
    Li, Yuan
    Liu, Xu
    Li, Qinbin
    Wu, Zhaomin
    He, Bingsheng
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [9] Tango: A Deep Neural Network Benchmark Suite for Various Accelerators
    Karki, Aajna
    Keshava, Chethan Palangotu
    Shivakumar, Spoorthi Mysore
    Skow, Joshua
    Hegde, Goutam Madhukeshwar
    Jeon, Hyeran
    2019 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS), 2019, : 137 - 138
  • [10] A BENCHMARK CHARACTERIZATION OF THE EEMBC BENCHMARK SUITE
    Poovey, Jason A.
    Conte, Thomas M.
    Levy, Markus
    Gal-On, Shay
    IEEE MICRO, 2009, 29 (05) : 18 - 29