Local and Global Sparsity for Deep Learning Networks

Cited by: 0
Authors
Zhang, Long [1 ]
Zhao, Jieyu [1 ]
Shi, Xiangfu [1 ]
Ye, Xulun [1 ]
Affiliations
[1] Ningbo Univ, Dept Comp Sci, 818 Fenghua Rd, Ningbo 315211, Peoples R China
Source
IMAGE AND GRAPHICS (ICIG 2017), PT II | 2017, Vol. 10667
Funding
National Natural Science Foundation of China
Keywords
Sparsity; Regularization; Deep learning; GAN;
DOI
10.1007/978-3-319-71589-6_7
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Applying sparsity regularization to deep learning networks has proved to be an effective approach. Researchers have developed several algorithms to control the sparseness of the activation probabilities of hidden units, but each has inherent limitations. In this paper, we first analyze the strengths and weaknesses of popular sparsity algorithms and categorize them into two groups: local and global sparsity. L-1/2 regularization is introduced for the first time as a global sparsity method for deep learning networks. Second, a combined solution is proposed that integrates local and global sparsity methods. Third, we adapt the proposed solution to two deep learning networks, the deep belief network (DBN) and the generative adversarial network (GAN), and test it on the benchmark datasets MNIST and CelebA. Experimental results show that our method outperforms existing sparsity algorithms on digit recognition and achieves better performance on human face generation. Additionally, the proposed method stabilizes the GAN loss and reduces noise.
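The abstract gives no implementation details, so as a rough illustration only, the sketch below (in PyTorch) combines a KL-divergence penalty on hidden activation probabilities, a common local sparsity term, with an L-1/2 norm on the weights as the global term. All names and coefficients (rho, lam_local, lam_global) are assumptions for illustration, not values from the paper.

import torch

def l_half_penalty(weights, eps=1e-8):
    # Global sparsity: L-1/2 regularizer, sum_i |w_i|^(1/2).
    # eps keeps the non-convex penalty differentiable at zero.
    return (weights.abs() + eps).sqrt().sum()

def kl_sparsity_penalty(hidden_probs, rho=0.05, eps=1e-8):
    # Local sparsity: KL divergence between a target activation rate rho
    # and the mean activation probability rho_hat of each hidden unit.
    rho_hat = hidden_probs.mean(dim=0).clamp(eps, 1.0 - eps)
    kl = (rho * torch.log(rho / rho_hat)
          + (1.0 - rho) * torch.log((1.0 - rho) / (1.0 - rho_hat)))
    return kl.sum()

def combined_loss(task_loss, hidden_probs, weights,
                  lam_local=0.1, lam_global=1e-4):
    # Hypothetical trade-off weights; they would need tuning per network.
    return (task_loss
            + lam_local * kl_sparsity_penalty(hidden_probs)
            + lam_global * l_half_penalty(weights))

Under this reading, in a DBN such a penalty would be added to each layer's training objective, and in a GAN to the generator or discriminator loss, which is where the abstract reports the stabilizing effect.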
Pages: 74-85
Number of pages: 12
Related Papers
50 records in total
  • [41] Modeling Global Dynamics from Local Snapshots with Deep Generative Neural Networks
    Gigante, Scott
    van Dijk, David
    Moon, Kevin R.
    Strzalkowski, Alexander
    Wolf, Guy
    Krishnaswamy, Smita
    2019 13TH INTERNATIONAL CONFERENCE ON SAMPLING THEORY AND APPLICATIONS (SAMPTA), 2019,
  • [42] Sparsity-Aware Caches to Accelerate Deep Neural Networks
    Ganesan, Vinod
    Sen, Sanchari
    Kumar, Pratyush
    Gala, Neel
    Veezhinathan, Kamakoti
    Raghunathan, Anand
    PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020, : 85 - 90
  • [43] Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks
    Xue, Anton
    Lindemann, Lars
    Robey, Alexander
    Hassani, Hamed
    Pappas, George J.
    Alur, Rajeev
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 3389 - 3396
  • [44] POSTER: Exploiting the Input Sparsity to Accelerate Deep Neural Networks
    Dong, Xiao
    Liu, Lei
    Li, Guangli
    Li, Jiansong
    Zhao, Peng
    Wang, Xueying
    Feng, Xiaobing
    PROCEEDINGS OF THE 24TH SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING (PPOPP '19), 2019, : 401 - 402
  • [45] Variance-Guided Structured Sparsity in Deep Neural Networks
    Pandit, M. K.
    Banday, M.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (06): 1714 - 1723
  • [46] Acorns: A Framework for Accelerating Deep Neural Networks with Input Sparsity
    Dong, Xiao
    Liu, Lei
    Zhao, Peng
    Li, Guangli
    Li, Jiansong
    Wang, Xueying
    Feng, Xiaobing
    2019 28TH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT 2019), 2019, : 178 - 191
  • [47] Sparsity-aware generalization theory for deep neural networks
    Muthukumar, Ramchandran
    Sulam, Jeremias
    THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195
  • [48] Sparsity-Aware Orthogonal Initialization of Deep Neural Networks
    Esguerra, Kiara
    Nasir, Muneeb
    Tang, Tong Boon
    Tumian, Afidalina
    Ho, Eric Tatt Wei
    IEEE ACCESS, 2023, 11 : 74165 - 74181
  • [49] Multi-Head Deep Metric Learning Using Global and Local Representations
    Ebrahimpour, Mohammad K.
    Qian, Gang
    Beach, Allison
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1340 - 1349
  • [50] Image Dehazing Algorithm Based on Deep Learning Coupled Local and Global Features
    Li, Shuping
    Yuan, Qianhao
    Zhang, Yeming
    Lv, Baozhan
    Wei, Feng
    APPLIED SCIENCES-BASEL, 2022, 12 (17)