The Limits of SEMA on Distinguishing Similar Activation Functions of Embedded Deep Neural Networks

Cited by: 3
Authors
Takatoi, Go [1 ]
Sugawara, Takeshi [1 ]
Sakiyama, Kazuo [1 ]
Hara-Azumi, Yuko [2 ]
Li, Yang [1 ]
Affiliations
[1] Univ Electrocommun, Dept Informat, 1-5-1 Chofugaoka, Chofu, Tokyo 1828585, Japan
[2] Tokyo Inst Technol, Dept Informat & Commun Engn, Meguro Ku, 2-12-1 Ookayama, Tokyo 1528550, Japan
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 9
Keywords
machine learning; deep learning; side-channel; activation function; SEMA;
DOI
10.3390/app12094135
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Artificial intelligence (AI) is progressing rapidly, and edge AI has been intensively researched as part of this trend. Much less work, however, has addressed the security of edge AI. Machine learning models embody substantial intellectual property, and an optimized network is highly valuable. Trained models also need to remain black boxes, because they can leak information about their training data. Since selecting activation functions that enable fast training of accurate deep neural networks is an active research area, the activation functions used in a network architecture are themselves worth concealing. Physical attacks such as side-channel attacks (SCAs) have been studied in areas other than cryptography, and SCAs are particularly effective against edge AI because the device computes physically close to the user. We studied a previously proposed method that uses simple electromagnetic analysis (SEMA) to retrieve the activation functions of a black-box neural network implemented on an edge device, and we improved its signal-processing procedure to handle noisier measurements. The SEMA attack identifies activation functions by directly observing the distinctive electromagnetic (EM) traces produced by the operations inside each activation function. The method requires only a few executions and inputs and depends little on how the activation functions are implemented. Using EM measurements, we distinguished eight similar activation functions and examined the versatility and limits of the attack. The evaluated machine learning architecture is a multilayer perceptron running on an Arduino Uno.
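To make the matching step concrete, the following is a minimal Python sketch, not the authors' actual pipeline: the function names, the correlation-based matching rule, and the simulated trace shapes are all illustrative assumptions. It averages a few labeled EM traces per candidate activation function into templates and labels an unknown trace with the best-correlated template.

    import numpy as np

    def build_templates(labeled_traces):
        # Average the labeled EM traces recorded for each candidate
        # activation function into one per-function template.
        return {name: np.mean(np.stack(traces), axis=0)
                for name, traces in labeled_traces.items()}

    def classify_trace(trace, templates):
        # Score the unknown trace against every template with Pearson
        # correlation and return the best-matching activation function.
        scores = {name: np.corrcoef(trace, template)[0, 1]
                  for name, template in templates.items()}
        return max(scores, key=scores.get)

    # Toy demonstration with simulated traces: each activation function
    # is assumed (hypothetically) to leave a characteristic EM pattern,
    # observed under additive noise.
    rng = np.random.default_rng(0)
    patterns = {
        "relu": np.concatenate([np.ones(50), np.zeros(50)]),
        "sigmoid": np.concatenate([np.zeros(50), np.ones(50)]),
    }
    labeled = {name: [p + 0.3 * rng.standard_normal(p.size) for _ in range(10)]
               for name, p in patterns.items()}
    templates = build_templates(labeled)
    unknown = patterns["relu"] + 0.3 * rng.standard_normal(100)
    print(classify_trace(unknown, templates))  # expected output: relu

Real measurements would additionally require trace alignment and noise filtering, which is the signal-processing stage the paper improves.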
Pages: 20
Related Papers (50 items)
  • [31] Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
    Jagtap, Ameya D.
    Kawaguchi, Kenji
    Karniadakis, George Em
    JOURNAL OF COMPUTATIONAL PHYSICS, 2020, 404
  • [32] Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic
    Cococcioni, Marco
    Rossi, Federico
    Ruffaldi, Emanuele
    Saponara, Sergio
    SENSORS, 2020, 20 (05)
  • [33] Embedded Streaming Deep Neural Networks Accelerator With Applications
    Dundar, Aysegul
    Jin, Jonghoon
    Martini, Berin
    Culurciello, Eugenio
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (07): 1572-1583
  • [34] On the approximation of rough functions with deep neural networks
    De Ryck T.
    Mishra S.
    Ray D.
    SeMA Journal, 2022, 79 (03): 399-440
  • [36] Hölder Continuous Activation Functions in Neural Networks
    Tatar, Nasser-Eddine
    ADVANCES IN DIFFERENTIAL EQUATIONS AND CONTROL PROCESSES, 2015, 15 (02): 93-106
  • [37] A Comparison of Activation Functions in Artificial Neural Networks
    Bircanoglu, Cenk
    Arica, Nafiz
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018
  • [38] Deep Convolutional Neural Networks on Cartoon Functions
    Grohs, Philipp
    Wiatowski, Thomas
    Bolcskei, Helmut
    2016 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, 2016: 1163-1167
  • [39] Stochastic Neural Networks with Monotonic Activation Functions
    Ravanbakhsh, Siamak
    Poczos, Barnabas
    Schneider, Jeff
    Schuurmans, Dale
    Greiner, Russell
    ARTIFICIAL INTELLIGENCE AND STATISTICS, 2016, 51: 809-818
  • [40] Quantum activation functions for quantum neural networks
    Maronese, Marco
    Destri, Claudio
    Prati, Enrico
    QUANTUM INFORMATION PROCESSING, 2022, 21 (04)