Reducing Style Overfitting for Character Recognition via Parallel Neural Networks with Style to Content Connection

Cited by: 1
Authors:
Tang, Wei [1 ,2 ,3 ]
Jiang, Yiwen [1 ,2 ,3 ]
Gao, Neng [3 ]
Xiang, Ji [3 ]
Shen, Jiahui [3 ]
Li, Xiang [1 ,2 ,3 ]
Su, Yijun [1 ,2 ,3 ]
Affiliations:
[1] Chinese Acad Sci, State Key Lab Informat Secur, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[3] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
Source:
2019 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2019) | 2019
Keywords:
character recognition; style overfitting; neural network;
DOI:
10.1109/ISPA-BDCloud-SustainCom-SocialCom48970.2019.00117
CLC number: TP3 [Computing Technology, Computer Technology]
Subject classification code: 0812
Abstract:
There is a significant style overfitting problem in neural-based character recognition: insufficient generalization ability to recognize characters with unseen styles. To address this problem, we propose a novel framework named Style-Melt Nets (SMN), which disentangles the style and content factors to extract pure content features. In this framework, a pair of parallel networks, a style net and a content net, is designed to infer the style labels and content labels of input character images, respectively, and the style feature produced by the style net is fed to the content net to eliminate the style influence on the content features. In addition, the marginal distribution of character pixels is used as an important structure indicator for enhancing the content representations. Furthermore, to increase the style diversity of the training data, an efficient data augmentation approach is presented that changes the thickness of strokes and generates outline characters. Extensive experimental results demonstrate the benefit of our methods, and the proposed SMN achieves state-of-the-art performance on multiple real-world character sets.
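Two ideas from the abstract, the marginal pixel distribution as a structure indicator and the stroke-thickness/outline augmentation, can be illustrated with simple image operations. The sketch below is hypothetical (not the authors' code) and assumes binary glyph images stored as nested lists of 0/1; it uses basic 4-neighbour morphological dilation and erosion, a common way to thicken or thin strokes.

```python
def marginals(img):
    """Row and column pixel sums of a binary glyph, normalized to distributions."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    total = sum(rows) or 1
    return [r / total for r in rows], [c / total for c in cols]

def dilate(img):
    """Thicken strokes: a pixel turns on if itself or any 4-neighbour is on."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            on = img[y][x]
            if y > 0:     on = on or img[y - 1][x]
            if y < h - 1: on = on or img[y + 1][x]
            if x > 0:     on = on or img[y][x - 1]
            if x < w - 1: on = on or img[y][x + 1]
            out[y][x] = 1 if on else 0
    return out

def erode(img):
    """Thin strokes: a pixel stays on only if all four neighbours are on."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (img[y][x] and img[y - 1][x] and img[y + 1][x]
                    and img[y][x - 1] and img[y][x + 1]):
                out[y][x] = 1
    return out

def outline(img):
    """Hollow (outline) character: keep stroke pixels whose interior erodes away."""
    e = erode(img)
    return [[img[y][x] & (1 - e[y][x]) for x in range(len(img[0]))]
            for y in range(len(img))]
```

Applying `dilate`/`erode` yields thicker/thinner variants of a glyph and `outline` produces a hollow variant, which is one plausible way to diversify training styles as the abstract describes; the normalized row/column marginals give a style-insensitive summary of the character's spatial structure.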
Pages: 784-791 (8 pages)
Related papers (50 items in total):
  • [21] Writing Style Adversarial Network for Handwritten Chinese Character Recognition
    Liu, Huan
    Lyu, Shujing
    Zhan, Hongjian
    Lu, Yue
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT IV, 2019, 1142 : 66 - 74
  • [22] TrueType Transformer: Character and Font Style Recognition in Outline Format
    Nagata, Yusuke
    Otao, Jinki
    Haraguchi, Daichi
    Uchida, Seiichi
    DOCUMENT ANALYSIS SYSTEMS, DAS 2022, 2022, 13237 : 18 - 32
  • [23] Style and Content Disentanglement in Generative Adversarial Networks
    Kazemi, Hadi
    Iranmanesh, Seyed Mehdi
    Nasrabadi, Nasser M.
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 848 - 856
  • [24] 3-D OBJECT RECOGNITION USING HOPFIELD-STYLE NEURAL NETWORKS
    KAWAGUCHI, T
    SETOGUCHI, T
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 1994, E77D (08) : 904 - 917
  • [25] Reducing Overfitting in Deep Convolutional Neural Networks Using Redundancy Regularizer
    Wu, Bingzhe
    Liu, Zhichao
    Yuan, Zhihang
    Sun, Guangyu
    Wu, Charles
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 49 - 55
  • [26] Learning musical structure and style with neural networks
    Hörnel, D
    Menzel, W
    COMPUTER MUSIC JOURNAL, 1998, 22 (04) : 44 - 62
  • [27] Adapting Style and Content for Attended Text Sequence Recognition
    Schwarcz, Steven
    Gorban, Alexander
    Serra, Xavier Gibert
    Lee, Dar-Shyang
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 1586 - 1595
  • [28] Distilling Content from Style for Handwritten Word Recognition
    Kang, Lei
    Riba, Pau
    Rusinol, Marcal
    Fornes, Alicia
    Villegas, Mauricio
    2020 17TH INTERNATIONAL CONFERENCE ON FRONTIERS IN HANDWRITING RECOGNITION (ICFHR 2020), 2020, : 139 - 144
  • [29] CHARACTER-RECOGNITION WITH NEURAL NETWORKS
    FUKUSHIMA, K
    NEUROCOMPUTING, 1992, 4 (05) : 221 - 233
  • [30] NEURAL NETWORKS AND CHARACTER-RECOGNITION
    KARNOFSKY, K
    DR DOBBS JOURNAL, 1993, 18 (06): : 96 - &