Interpretability of artificial neural network models in artificial intelligence versus neuroscience

Cited by: 24
Authors
Kar, Kohitij [1 ,2 ,3 ,4 ]
Kornblith, Simon [5 ]
Fedorenko, Evelina [2 ,3 ]
Affiliations
[1] York Univ, Ctr Vis Res, Dept Biol, Toronto, ON, Canada
[2] MIT, McGovern Inst Brain Res, Cambridge, MA 02139 USA
[3] MIT, Dept Brain & Cognit Sci, Cambridge, MA 02139 USA
[4] MIT, Ctr Brains Minds & Machines, Cambridge, MA 02139 USA
[5] Brain Team, Google Res, Toronto, ON, Canada
DOI
10.1038/s42256-022-00592-3
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The notion of 'interpretability' of artificial neural networks (ANNs) is of growing importance in both neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists than it does to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.
Pages: 1065-1067
Page count: 3