Building on prior knowledge without building it in

Cited by: 3
Authors
Hansen, Steven S. [1 ]
Lampinen, Andrew K. [1 ]
Suri, Gaurav [2 ]
McClelland, James L. [1 ]
Affiliations
[1] Stanford Univ, Psychol Dept, Stanford, CA 94305 USA
[2] San Francisco State Univ, Psychol Dept, San Francisco, CA 94132 USA
DOI: 10.1017/S0140525X17000176
Chinese Library Classification: B84 [Psychology]
Subject Classification Codes: 04; 0402
Abstract
Lake et al. propose that people rely on "start-up software," "causal models," and "intuitive theories" built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
Pages: 2