Training neural networks with heterogeneous data

Cited by: 6
Authors
Drakopoulos, JA [1 ]
Abdulkader, A [1 ]
Affiliation
[1] Microsoft Corp, Tablet PC Handwriting Recognition Group, Redmond, WA 98052, USA
Keywords
heterogeneous data; neural networks; training schedule; data emphasizing; boosting; growing cell structure; neural gas
DOI
10.1016/j.neunet.2005.06.011
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Data pruning and ordered training are two methods, and the results of a small theory, that attempt to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to them, assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network, one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper, along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian. © 2005 Elsevier Ltd. All rights reserved.
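The abstract states only that pruning removes noisy samples and that training time scales polynomially with category size; it does not give the concrete criteria. Below is a minimal sketch of both ideas under those stated assumptions: a loss-threshold pruning rule and an epoch budget split in proportion to size**p. The function names, the threshold criterion, and the exponent p are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only; the pruning rule and schedule are assumptions,
# not the method of Drakopoulos & Abdulkader (2005).
from typing import Dict, List, Sequence


def prune_noisy(samples: Sequence, losses: Sequence[float],
                threshold: float) -> List:
    """Data pruning: drop samples whose loss under a reference model
    exceeds a threshold (one plausible notion of 'noisy' data)."""
    return [s for s, loss in zip(samples, losses) if loss <= threshold]


def ordered_training_schedule(category_sizes: Dict[str, int],
                              total_epochs: int,
                              p: float = 0.5) -> Dict[str, int]:
    """Ordered training: allocate epochs to each data category in
    proportion to size**p, i.e. a polynomial relation between data
    size and training time, normalized to a fixed epoch budget."""
    weights = {c: n ** p for c, n in category_sizes.items()}
    total = sum(weights.values())
    return {c: max(1, round(total_epochs * w / total))
            for c, w in weights.items()}


# Example with three hypothetical handwriting-data categories:
# with p = 0.5 the 60-epoch budget splits 30 / 20 / 10.
print(ordered_training_schedule({"cursive": 90_000,
                                 "print": 40_000,
                                 "mixed": 10_000},
                                total_epochs=60, p=0.5))
```

In this sketch, p < 1 compresses the gap between large and small categories, while p = 1 reduces to allocation proportional to raw data size.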
Pages: 595-601
Page count: 7
Related Papers
50 records in total; the first 10 are listed below.
  • [1] Training Neural Networks on Noisy Data
    Rusiecki, Andrzej
    Kordos, Miroslaw
    Kaminski, Tomasz
    Gren, Krzysztof
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING ICAISC 2014, PT I, 2014, 8467 : 131 - 142
  • [2] Modeling heterogeneous data sets with neural networks
    Belanche Munoz, Lluis A.
    INNOVATIONS IN HYBRID INTELLIGENT SYSTEMS, 2007, 44 : 96 - +
  • [3] Towards Better Performance with Heterogeneous Training Data in Acoustic Modeling Using Deep Neural Networks
    Huang, Yan
    Slaney, Malcolm
    Seltzer, Michael L.
    Gong, Yifan
    15TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2014), VOLS 1-4, 2014, : 845 - 849
  • [4] Training Heterogeneous Graph Neural Networks using Bandit Sampling
    Wang, Ta-Yang
    Kannan, Rajgopal
    Prasanna, Viktor
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 4345 - 4349
  • [5] Evolutionary Training of Deep Neural Networks on Heterogeneous Computing Environments
    Kalia, Subodh
    Mohan, Chilukuri K.
    Nemani, Ramakrishna
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 2318 - 2321
  • [6] PHGNN: Pre-Training Heterogeneous Graph Neural Networks
    Li, Xin
    Wei, Hao
    Ding, Yu
    IEEE ACCESS, 2024, 12 : 135411 - 135418
  • [7] Training RBF neural networks on unbalanced data
    Fu, XJ
    Wang, LP
    Chua, KS
    Chu, F
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 1016 - 1020
  • [8] Efficient training of interval Neural Networks for imprecise training data
    Sadeghi, Jonathan
    de Angelis, Marco
    Patelli, Edoardo
    NEURAL NETWORKS, 2019, 118 : 338 - 351
  • [9] Interpretable Graph Neural Networks for Heterogeneous Tabular Data
    Alkhatib, Amr
    Bostrom, Henrik
    DISCOVERY SCIENCE, DS 2024, PT I, 2025, 15243 : 310 - 324
  • [10] Data Dropout: Optimizing Training Data for Convolutional Neural Networks
    Wang, Tianyang
    Huan, Jun
    Li, Bo
    2018 IEEE 30TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2018, : 39 - 46