IMPROVING CONVOLUTIONAL NEURAL NETWORKS VIA COMPACTING FEATURES

Cited: 0
Authors
Zhou, Liguo [1 ,2 ]
Zhu, Rong [1 ,2 ]
Luo, Yimin [2 ,3 ]
Liu, Siwen [1 ,2 ]
Wang, Zhongyuan [1 ,2 ]
Affiliations
[1] Wuhan Univ, Comp Sch, Natl Engn Res Ctr Multimedia Software, Wuhan, Hubei, Peoples R China
[2] Collaborat Innovat Ctr Geospatial Informat Techno, Wuhan, Hubei, Peoples R China
[3] Wuhan Univ, Remote Sensing Informat Engn Sch, Wuhan, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Convolutional neural networks (CNNs); Softmax loss; joint supervision; visual classification; face verification;
DOI
Not available
CLC Classification Number
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Convolutional neural networks (CNNs) have shown great advantages in computer vision, and loss functions are of great significance to their gradient-descent training. Softmax loss, the combination of cross-entropy loss and the Softmax function, is the most commonly used loss for CNNs, as it continuously increases the discernibility of sample features in classification tasks. Intuitively, to promote the discriminative power of CNNs, the learned features are most desirable when inter-class separability and intra-class compactness are maximized simultaneously. Since Softmax loss hardly encourages both properties simultaneously and explicitly, we propose a new method to achieve this joint maximization. The method minimizes the distance between features of homogeneous (same-class) samples jointly with Softmax loss and thus improves CNN performance on vision-related tasks. Experiments on both visual classification and face verification datasets validate the effectiveness and advantages of our method.
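The record does not include the paper's exact formulation, but the abstract describes joint supervision: standard Softmax (cross-entropy) loss plus a term that pulls features of same-class samples together. The PyTorch sketch below is a minimal illustration under that reading, assuming a center-loss-style compactness penalty; the class CompactnessLoss, the weight lambda_c, and all tensor shapes are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn


class CompactnessLoss(nn.Module):
    """Squared distance between each feature and a learnable center of its class."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class (hypothetical parameterization).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        centers_batch = self.centers[labels]            # (N, feat_dim)
        return ((features - centers_batch) ** 2).sum(dim=1).mean()


# Joint objective: total = Softmax (cross-entropy) loss + lambda_c * compactness loss.
softmax_loss = nn.CrossEntropyLoss()
compact_loss = CompactnessLoss(num_classes=10, feat_dim=64)
lambda_c = 0.01  # illustrative weight balancing the two terms

features = torch.randn(8, 64, requires_grad=True)   # penultimate-layer CNN features
logits = torch.randn(8, 10, requires_grad=True)     # classifier outputs (10 classes)
labels = torch.randint(0, 10, (8,))

total = softmax_loss(logits, labels) + lambda_c * compact_loss(features, labels)
total.backward()  # gradients flow to features, logits, and the class centers

In practice the compactness term would be applied to the penultimate-layer features of the CNN being trained, with the weight tuned to trade off inter-class separability against intra-class compactness.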
Pages: 2946-2950
Number of pages: 5
Related Papers
50 records in total
  • [1] Compressing convolutional neural networks via intermediate features
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Wei, Xing
    Wei, Zhen
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2021, 41 (02) : 2687 - 2699
  • [2] Improving Performance of Convolutional Neural Networks via Feature Embedding
    Ghoshal, Torumoy
    Zhang, Silu
    Dang, Xin
    Wilkins, Dawn
    Chen, Yixin
    PROCEEDINGS OF THE 2019 ANNUAL ACM SOUTHEAST CONFERENCE (ACMSE 2019), 2019, : 31 - 38
  • [3] IMPROVING THE ROBUSTNESS OF CONVOLUTIONAL NEURAL NETWORKS VIA SKETCH ATTENTION
    Chu, Tianshu
    Yang, Zuopeng
    Yang, Jie
    Huang, Xiaolin
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 869 - 873
  • [4] Improving Convolutional Neural Networks for Fault Diagnosis by Assimilating Global Features
    Al-Wahaibi, Saif S. S.
    Lu, Qiugang
    2023 AMERICAN CONTROL CONFERENCE, ACC, 2023, : 4729 - 4734
  • [5] Visual Features for Improving Endoscopic Bleeding Detection Using Convolutional Neural Networks
    Brzeski, Adam
    Dziubich, Tomasz
    Krawczyk, Henryk
    SENSORS, 2023, 23 (24)
  • [6] Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units
    Shang, Wenling
    Sohn, Kihyuk
    Almeida, Diogo
    Lee, Honglak
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [7] Improving time series features identification by means of Convolutional Neural Networks and Recurrence Plot
    Strozzi, Fernanda
    Pozzi, Rossella
    IFAC PAPERSONLINE, 2022, 55 (10): : 601 - 606
  • [8] Staining Invariant Features for Improving Generalization of Deep Convolutional Neural Networks in Computational Pathology
    Otalora, Sebastian
    Atzori, Manfredo
    Andrearczyk, Vincent
    Khan, Amjad
    Mueller, Henning
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2019, 7 (AUG)
  • [9] Improving Precipitation Forecasts with Convolutional Neural Networks
    Badrinath, Anirudhan
    delle Monache, Luca
    Hayatbini, Negin
    Chapman, Will
    Cannon, Forest
    Ralph, Marty
    WEATHER AND FORECASTING, 2023, 38 (02) : 291 - 306
  • [10] Learning Text Component Features via Convolutional Neural Networks for Scene Text Detection
    Khlif, Wafa
    Nayef, Nibal
    Burie, Jean-Christophe
    Ogier, Jean-Marc
    Alimi, Adel
    2018 13TH IAPR INTERNATIONAL WORKSHOP ON DOCUMENT ANALYSIS SYSTEMS (DAS), 2018, : 79 - 84