Efficient Implementation of Activation Functions for LSTM Accelerators

Cited: 5
Authors
Chong, Yi Sheng [1 ,2 ]
Goh, Wang Ling [1 ]
Ong, Yew Soon [3 ]
Nambiar, Vishnu P. [4 ]
Anh Tuan Do [4 ]
Affiliations
[1] Nanyang Technological University (NTU), School of Electrical and Electronic Engineering, Singapore
[2] Nanyang Technological University, Interdisciplinary Graduate Programme, Energy Research Institute, Singapore
[3] Nanyang Technological University, School of Computer Science and Engineering, Singapore
[4] A*STAR, Institute of Microelectronics, Singapore
Keywords
DESIGN
DOI
10.1109/VLSI-SoC53125.2021.9606971
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Activation functions such as the hyperbolic tangent (tanh) and logistic sigmoid are critical computing elements in a long short-term memory (LSTM) cell and network. These functions are non-linear, which makes their hardware implementation challenging. Area-efficient, high-performance hardware implementations of these activation functions are therefore crucial for achieving high throughput in an LSTM accelerator. In this work, we propose an approximation scheme that suits both the tanh and sigmoid functions. The proposed sigmoid hardware is 8.3 times smaller than the state of the art, while the tanh hardware is the second-smallest design. When the approximated tanh and sigmoid, each with 2% error, are applied to an LSTM cell computation, the final hidden state and cell state show errors of 3.1% and 5.8%, respectively. When the same approximated functions are applied to a single-layer LSTM network with 64 hidden nodes, accuracy drops by only 2.8%. This small yet accurate activation-function hardware is promising for Internet of Things (IoT) applications in which accuracy can be traded for ultra-low power consumption.
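The abstract does not spell out the approximation scheme itself, but a well-known hardware-friendly family it can be compared against is piecewise-linear approximation with power-of-two slopes, such as PLAN (Amin et al., 1997), combined with the identity tanh(x) = 2·sigmoid(2x) − 1 so that a single circuit serves both activations. The sketch below is purely illustrative of that idea, not the authors' design; PLAN's maximum sigmoid error (about 1.9%) happens to fall in the range of the 2% error the abstract quotes.

```python
import numpy as np

def sigmoid_plan(x):
    """PLAN piecewise-linear sigmoid (Amin et al., 1997).
    All slopes are powers of two (2^-2, 2^-3, 2^-5), so in hardware the
    multiplications reduce to shifts. Illustrative only; not necessarily
    the scheme proposed in this paper."""
    ax = np.abs(x)
    y = np.where(ax >= 5.0, 1.0,
        np.where(ax >= 2.375, 0.03125 * ax + 0.84375,
        np.where(ax >= 1.0, 0.125 * ax + 0.625,
                            0.25 * ax + 0.5)))
    # Symmetry sigmoid(-x) = 1 - sigmoid(x) halves the required logic.
    return np.where(x >= 0.0, y, 1.0 - y)

def tanh_shared(x):
    """tanh(x) = 2*sigmoid(2x) - 1: one sigmoid circuit can serve both
    activations, matching the abstract's single shared scheme."""
    return 2.0 * sigmoid_plan(2.0 * x) - 1.0

if __name__ == "__main__":
    xs = np.linspace(-8.0, 8.0, 1601)
    sig_err = np.max(np.abs(sigmoid_plan(xs) - 1.0 / (1.0 + np.exp(-xs))))
    tanh_err = np.max(np.abs(tanh_shared(xs) - np.tanh(xs)))
    print(f"max |error|: sigmoid {sig_err:.4f}, tanh {tanh_err:.4f}")
```

Note that deriving tanh through the identity doubles the absolute error of the underlying sigmoid approximation, which is one reason dedicated tanh segmentations are sometimes used instead.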
Pages: 19-23
Page count: 5