Training Strategies for Convolutional Neural Networks with Transformed Input

Cited: 0
Authors
Khandani, Masoumeh Kalantari [1 ]
Mikhael, Wasfy B. [1 ]
Affiliations
[1] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
Keywords
image classification; convolutional neural networks; DCT; DWT; domain transforms
DOI
10.1109/MWSCAS47672.2021.9531913
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Convolutional Neural Networks (CNNs) are now considered the main tool for image classification. However, most networks studied for classification are large, with extensive computing and storage requirements, and their training times are usually very long. Such costly computational and storage requirements cannot be met in many applications running on simple devices such as small processors or Internet of Things (IoT) devices. Reducing network and input sizes therefore becomes necessary, but such reductions are not easy and may degrade classification performance. We examine how domain transforms, under different training strategies, can be used for efficient size reduction and improved classification accuracy. In this paper, we consider networks with under 220K learnable parameters, as opposed to the millions found in deeper networks. We show that by representing the input to a CNN using an appropriately selected domain transform, such as the discrete wavelet transform (DWT) or the discrete cosine transform (DCT), it is possible to efficiently improve the performance of size-reduced networks. For example, the DWT proves very effective when significant size reduction is needed, improving the result by up to 9%. It is also shown that by tuning training strategies such as the number of epochs and the mini-batch size, performance can be further improved by up to 4% under a fixed training time.
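The abstract's core idea is to feed a small CNN a compact transform-domain representation of the image rather than raw pixels. The sketch below illustrates one plausible version of this preprocessing using an orthonormal type-II DCT and low-frequency truncation; the paper's exact architecture, transform settings, and truncation size are not given here, so the `keep` parameter and the matrix-based DCT implementation are illustrative assumptions, not the authors' method.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal type-II DCT matrix of size n x n (numpy-only,
    equivalent in effect to an orthonormal DCT-II transform)."""
    k = np.arange(n)[:, None]   # frequency index (rows)
    i = np.arange(n)[None, :]   # sample index (columns)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)  # DC row gets the 1/sqrt(n) scale
    return m

def dct_representation(image, keep=16):
    """Hypothetical preprocessing step: map a grayscale image to the
    2-D DCT domain and keep only the top-left keep x keep block of
    low-frequency coefficients, shrinking the input a reduced-size
    CNN must process. `keep` is an assumed parameter."""
    n, _ = image.shape
    M = dct_matrix(n)
    coeffs = M @ image.astype(np.float64) @ M.T  # separable 2-D DCT
    return coeffs[:keep, :keep]

# Example: reduce a 32x32 image to a 16x16 DCT-domain input
img = np.random.default_rng(0).random((32, 32))
small_input = dct_representation(img, keep=16)
```

Because most natural-image energy concentrates in low DCT frequencies, the truncated block retains most of the information while quartering the input area, which is consistent with the paper's goal of improving size-reduced networks.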
Pages: 1058-1061
Page count: 4