Explainable neural networks that simulate reasoning

Cited by: 24
Authors
Blazek, Paul J. [1 ,2 ,3 ]
Lin, Milo M. [1 ,2 ,3 ,4 ]
Affiliations
[1] Univ Texas Southwestern Med Ctr Dallas, Green Ctr Syst Biol, Dallas, TX 75390 USA
[2] Univ Texas Southwestern Med Ctr Dallas, Dept Bioinformat, Dallas, TX 75390 USA
[3] Univ Texas Southwestern Med Ctr Dallas, Dept Biophys, Dallas, TX 75390 USA
[4] Univ Texas Southwestern Med Ctr Dallas, Ctr Alzheimers & Neurodegenerat Dis, Dallas, TX 75390 USA
Source
NATURE COMPUTATIONAL SCIENCE | 2021, Vol. 1, Issue 09
Keywords
SPARSENESS; ALGORITHMS; RESPONSES; DESIGN; MODELS;
DOI
10.1038/s43588-021-00132-w
Chinese Library Classification
TP39 [Computer applications];
Discipline codes
081203 ; 0835 ;
Abstract
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
Pages: 607-618
Page count: 12
Related papers
50 records total
  • [1] Explainable neural networks that simulate reasoning
    Paul J. Blazek
    Milo M. Lin
    Nature Computational Science, 2021, 1 : 607 - 618
  • [2] Forward Composition Propagation for Explainable Neural Reasoning
    Grau, Isel
    Napoles, Gonzalo
    Bello, Marilyn
    Salgueiro, Yamisleydi
    Jastrzebska, Agnieszka
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2024, 19 (01) : 26 - 35
  • [3] Faithfully Explainable Recommendation via Neural Logic Reasoning
    Zhu, Yaxin
    Xian, Yikun
    Fu, Zuohui
    de Melo, Gerard
    Zhang, Yongfeng
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3083 - 3090
  • [4] The Road to Explainable Graph Neural Networks
    Ranu, Sayan
    SIGMOD RECORD, 2024, 53 (03)
  • [5] Pattern reasoning: Logical reasoning of neural networks
    Tsukimoto, Hiroshi
    Systems and Computers in Japan, 2001, 32 (02) : 1 - 10
  • [6] Explainable Neural Networks: Achieving Interpretability in Neural Models
    Chakraborty, Manomita
    ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING, 2024, 31 (06) : 3535 - 3550
  • [7] Feature-Enhanced Neural Collaborative Reasoning for Explainable Recommendation
    Zhang, Xiaoyu
    Shi, Shaoyun
    Li, Yishan
    Ma, Weizhi
    Sun, Peijie
    Zhang, Min
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 43 (01)
  • [8] An explanation of reasoning neural networks
    Tsaih, RR
    MATHEMATICAL AND COMPUTER MODELLING, 1998, 28 (02) : 37 - 44
  • [9] Reasoning with neural logic networks
    Yasdi, R
    NEW DIRECTIONS IN ROUGH SETS, DATA MINING, AND GRANULAR-SOFT COMPUTING, 1999, 1711 : 343 - 351
  • [10] Neural networks for abstraction and reasoning
    Bober-Irizar, Mikel
    Banerjee, Soumya
    SCIENTIFIC REPORTS, 2024, 14 (01)