Meta-learning in active inference

Cited: 0
Authors
Penacchio, O. [1 ,2 ]
Clemente, A. [3 ]
Institutions
[1] Autonomous Univ Barcelona, Comp Sci Dept, Barcelona, Spain
[2] Univ St Andrews, Sch Psychol & Neurosci, St Andrews, Scotland
[3] Max Planck Inst Empir Aesthet, Dept Cognit Neuropsychol, Frankfurt, Germany
Keywords
Bayesian inference; cognitive modeling; meta-learning; neural networks; rational analysis;
DOI
10.1017/S0140525X24000074
CLC Classification
B84 [Psychology];
Subject Classification
04; 0402;
Abstract
Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. Whereas the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter requires committing to a particular prior and a likelihood function that, in combination with Bayes' rule, determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, that is, by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to date. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.
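The abstract's two central claims can be illustrated with a toy sketch (not from the paper itself): first, that a prior and a likelihood, combined via Bayes' rule, fully determine a Bayesian model's behavior; second, that an algorithm which merely accumulates experience across many sampled tasks can approximate the Bayes-optimal answer. The sketch below assumes a Beta(1, 1) prior over a coin's heads probability and a Bernoulli likelihood; the "meta-learned" estimate is a simple simulation stand-in for training a model across tasks drawn from the prior, not the paper's actual method.

```python
import random

# Exact Bayesian answer: with a Beta(a, b) prior and Bernoulli likelihood,
# the posterior after observing the data is Beta(a + heads, b + tails),
# so the posterior-mean prediction follows directly from Bayes' rule.
def bayes_posterior_mean(prior_a, prior_b, obs):
    heads = sum(obs)
    return (prior_a + heads) / (prior_a + prior_b + len(obs))

# "Meta-learning" by simulation: repeatedly sample tasks (coin biases)
# from the prior, generate data for each task, and average the biases of
# the tasks whose data matched the observed sequence. With enough
# simulated tasks, this experience-driven estimate approaches the
# Bayes-optimal posterior mean, without ever writing down Bayes' rule.
def meta_learned_estimate(obs, n_tasks=200_000, seed=0):
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n_tasks):
        theta = rng.random()                           # task ~ prior
        data = [int(rng.random() < theta) for _ in obs]
        if data == obs:                                # same experience
            total += theta
            count += 1
    return total / count

obs = [1, 1, 1, 0]                     # three heads, one tail
print(bayes_posterior_mean(1, 1, obs))  # 0.666...
print(meta_learned_estimate(obs))       # close to the exact answer
```

The point of the comparison is the one made in the abstract: the Bayesian model's behavior is fixed once prior and likelihood are chosen, while the second estimator acquires the same inductive bias purely from repeated interaction with tasks sampled from the environment.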
Pages: 58