AMITE: A Novel Polynomial Expansion for Analyzing Neural Network Nonlinearities

Cited by: 1
Authors
Sanchirico, Mauro J. [1 ,2 ]
Jiao, Xun [2 ]
Nataraj, C. [3 ]
Affiliations
[1] Lockheed Martin Artificial Intelligence Ctr, Mt Laurel Township, NJ 08054 USA
[2] Villanova Univ, Dept Elect & Comp Engn, Villanova, PA 19085 USA
[3] Villanova Univ, Villanova Ctr Analyt Dynam Syst, Villanova, PA 19085 USA
Keywords
Neural networks; Taylor series; Chebyshev approximation; Transforms; Convergence; Learning systems; Kernel; Approximation; equivalence; Fourier; neural networks; polynomial; Taylor; HARDWARE IMPLEMENTATION; ACTIVATION FUNCTIONS; CONVERGENCE;
DOI
10.1109/TNNLS.2021.3130904
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Polynomial expansions are important in the analysis of neural network nonlinearities. They have been applied to address well-known difficulties in verification, explainability, and security. Existing approaches span classical Taylor and Chebyshev methods, asymptotics, and many numerical approaches. We find that, while these have useful properties individually, such as exact error formulas, adjustable domains, and robustness to undefined derivatives, no single approach provides a consistent method yielding an expansion with all of these properties. To address this, we develop an analytically modified integral transform expansion (AMITE), a novel expansion via integral transforms modified using derived criteria for convergence. We present the general expansion and then demonstrate its application to two popular activation functions: the hyperbolic tangent and the rectified linear unit (ReLU). Compared with existing expansions (i.e., Chebyshev, Taylor, and numerical) employed to this end, AMITE is the first to provide six previously mutually exclusive desired expansion properties, such as exact formulas for the coefficients and exact expansion errors. We demonstrate the effectiveness of AMITE in two case studies. First, a multivariate polynomial form is efficiently extracted from a single-hidden-layer black-box multilayer perceptron (MLP) to facilitate equivalence testing from noisy stimulus-response pairs. Second, a variety of feedforward neural network (FFNN) architectures with three to seven layers are range-bounded using Taylor models improved by the AMITE polynomials and error formulas. AMITE presents a new dimension of expansion methods suitable for the analysis and approximation of nonlinearities in neural networks, opening new directions and opportunities for the theoretical analysis and systematic testing of neural networks.
Pages: 5732-5744
Page count: 13
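
As an illustrative sketch only, not the paper's method: the first case study described in the abstract can be pictured in a few lines of Python, where the hidden-layer tanh is replaced by a truncated polynomial so that a single-hidden-layer MLP collapses into a multivariate polynomial in its inputs. AMITE's integral-transform coefficient and exact error formulas are not reproduced in this record, so a Chebyshev interpolant of tanh stands in for the expansion; the toy weights, the degree 9, and the domain [-4, 4] are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy single-hidden-layer MLP: y = W2 @ tanh(W1 @ x + b1) + b2
W1 = rng.normal(size=(8, 3))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(1, 8))
b2 = rng.normal(size=1)

def mlp(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Degree-9 polynomial stand-in for the tanh expansion on [-4, 4]
# (a Chebyshev interpolant, not AMITE's integral-transform coefficients).
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.tanh, 9, domain=[-4, 4])
poly = cheb.convert(kind=np.polynomial.Polynomial)  # plain power-series form

def mlp_poly(x):
    # With tanh replaced by a truncated polynomial, the network output is a
    # multivariate polynomial in x, which can be compared against noisy
    # stimulus-response pairs for equivalence testing.
    return W2 @ poly(W1 @ x + b1) + b2

x = rng.uniform(-1.0, 1.0, size=3)
print("MLP output       :", mlp(x))
print("Polynomial model :", mlp_poly(x))

The surrogate is only as accurate as the assumption that the pre-activations W1 @ x + b1 stay inside the chosen expansion domain, which mirrors the adjustable-domain property the abstract lists among the desired expansion properties.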