Impartial competitive learning in multi-layered neural networks

Cited: 0
Authors
Kamimura, Ryotaro [1,2]
Affiliations
[1] Tokai Univ, IT Educ Ctr, Hiratsuka, Kanagawa, Japan
[2] Kumamoto Drone Technol & Dev, Nishi Ku, Kamimatsuo, Kumamoto 8615289, Japan
Keywords
Impartial competitive learning; componential competition; computational competition; collective competition; cost; interpretation; mutual information; English
DOI
10.1080/09540091.2023.2174079
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Number
081104; 0812; 0835; 1405
Abstract
The present paper proposes a new learning and interpretation method called "impartial competitive learning", meaning that all participants in a competition should be winners. Because of its importance, impartiality is forced to be realised even at the expense of an increased cost in terms of the strength of the weights. As a first approximation, three types of impartial competition are developed: componential, computational, and collective. In componential competition, every weight should have, on average, an equal chance to win the competition. In computational competition, all computational procedures should have an equal chance to be applied sequentially during learning. In collective competition for interpretation, all network configurations obtained by learning have an equal chance to participate in the interpretation process, representing one of the most idealised forms of impartiality. The method was applied to a well-known second-language-learning data set. The intuitive conclusion emphasised in that field could not be extracted by conventional natural language processing methods, because they deal only with word frequency. The present method attempts to extract a main feature beyond word frequency by having connection weights and computational procedures compete impartially, followed by collective and impartial competition for interpretation.
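The abstract describes componential competition only conceptually. As an illustration, the following Python snippet is a minimal sketch, assuming a frequency-sensitive ("conscience"-style) competitive-learning step in which units that have won often are handicapped so that, on average, every unit has an equal chance to win. The function name, the bias term, and the learning rate are hypothetical and are not taken from the paper itself.

# Hypothetical sketch of "componential competition": frequent winners are
# penalised so that win frequencies equalise over time. This is an
# illustrative assumption, not the paper's actual formulation.
import numpy as np

def impartial_competitive_step(weights, x, win_counts, lr=0.1, bias_strength=1.0):
    """One update on input x; returns the index of the bias-adjusted winner."""
    distances = np.linalg.norm(weights - x, axis=1)
    # Handicap units whose share of past wins exceeds the uniform share.
    total = win_counts.sum() + 1e-12
    bias = bias_strength * (win_counts / total - 1.0 / len(weights))
    winner = np.argmin(distances + bias)
    weights[winner] += lr * (x - weights[winner])  # move the winner toward the input
    win_counts[winner] += 1
    return winner

# Usage: five competing units on two-dimensional random inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))
counts = np.zeros(5)
for _ in range(1000):
    impartial_competitive_step(W, rng.normal(size=2), counts)
print(counts)  # win counts should be roughly equal across the five units

Running the loop on random inputs yields win counts that are approximately equal across units, which is the kind of equalised, impartial componential competition the abstract describes.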
Pages: 33