Risk-averting criteria for training neural networks

Cited by: 0

Authors
Lo, JTH [1 ]
Affiliations
[1] Univ Maryland Baltimore Cty, Dept Math & Stat, Baltimore, MD 21228 USA
Keywords: (none listed)
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
This paper shows that when a risk-averting error criterion is used to train a neural network or estimate a nonlinear regression model, then as the risk-sensitivity index of the criterion increases, the domain on which the criterion is convex grows monotonically to the entire weight or parameter vector space R^N, except for the union of a finite number of manifolds whose dimensions are less than N. This paper also shows that increasing the risk-sensitivity index reduces the maximum deviation of the outputs of the trained neural network or estimated regression model from the corresponding output measurements.
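The criterion's behavior can be illustrated with a minimal numerical sketch. The exponential form C_λ = (1/K) Σ_k exp(λ‖e_k‖²) is assumed here, matching the risk-averting criterion in Lo's related work; the 1/K normalization and the function name are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def risk_averting_error(errors, lam):
    """Risk-averting error criterion (assumed form):
        C_lam = (1/K) * sum_k exp(lam * ||e_k||^2)
    where e_k are per-sample output errors and lam > 0 is the
    risk-sensitivity index. Small lam behaves like least squares;
    large lam heavily penalizes the largest deviations."""
    errors = np.asarray(errors, dtype=float)
    # squared error magnitude per sample (supports scalar or vector errors)
    sq = np.sum(errors**2, axis=-1) if errors.ndim > 1 else errors**2
    return np.mean(np.exp(lam * sq))
```

As λ → 0⁺ the criterion behaves like 1 + λ·MSE, so minimizing it approaches least-squares fitting, while large λ emphasizes the worst-case errors, consistent with the abstract's claim that increasing the risk-sensitivity index reduces the maximum deviation of the model outputs from the measurements.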
Pages: 476-481 (6 pages)