Controlled Dropout: a Different Dropout for Improving Training Speed on Deep Neural Network

Cited: 0
Authors
Ko, ByungSoo [1]
Kim, Han-Gyu [1]
Choi, Ho-Jin [1]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, 291 Daehak Ro, Daejeon, South Korea
Source
2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) | 2017
Keywords
Dropout; deep neural network; training speed improvement;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Dropout is a technique widely used to prevent overfitting while training deep neural networks. However, applying dropout to a neural network typically increases the training time. This paper proposes a different dropout approach, called controlled dropout, that improves training speed by dropping units in a column-wise or row-wise manner on the weight matrices. With controlled dropout, the network is trained using compressed matrices of smaller size, which yields a notable improvement in training speed. In experiments on feed-forward neural networks for the MNIST data set and convolutional neural networks for the CIFAR-10 and SVHN data sets, our proposed method achieves faster training than conventional methods on both CPU and GPU, while exhibiting the same regularization performance as conventional dropout. Moreover, the speed improvement grows as the number of fully-connected layers increases. Because neural network training is an iterative process of forward propagation and backpropagation, the per-iteration speed-up from controlled dropout translates into a significantly reduced overall training time.
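The following is a minimal NumPy sketch (not the authors' implementation) contrasting conventional element-wise dropout with the column-wise dropping described in the abstract; the function names, shapes, and inverted-dropout scaling are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def conventional_dropout_forward(x, W, p=0.5):
        # Element-wise dropout: the full-size matrix product is computed,
        # then individual activations are zeroed by a random mask.
        h = x @ W                                      # (batch, n_units)
        mask = (rng.random(h.shape) >= p) / (1.0 - p)  # inverted-dropout scaling
        return h * mask

    def controlled_dropout_forward(x, W, p=0.5):
        # Column-wise dropout: dropped units are removed from the weight
        # matrix, so the matrix product runs on a smaller, compressed matrix.
        keep = rng.random(W.shape[1]) >= p             # one decision per unit (column)
        W_small = W[:, keep]                           # compressed weight matrix
        h_small = (x @ W_small) / (1.0 - p)            # smaller matmul, hence faster
        return h_small, keep                           # 'keep' maps units back for backprop

    x = rng.standard_normal((128, 784))                # e.g. a batch of flattened MNIST images
    W = rng.standard_normal((784, 1024))
    print(conventional_dropout_forward(x, W).shape)    # (128, 1024): full-size output
    print(controlled_dropout_forward(x, W)[0].shape)   # roughly (128, 512): compressed output

In the conventional case the full-size product is still computed and only afterwards masked, whereas the controlled variant performs the product directly on the compressed weight matrix, which is where the reported speed improvement comes from.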
Pages: 972-977
Page count: 6