Meta-learning in active inference

Citations: 0
Authors
Penacchio, O. [1 ,2 ]
Clemente, A. [3 ]
Affiliations
[1] Autonomous Univ Barcelona, Comp Sci Dept, Barcelona, Spain
[2] Univ St Andrews, Sch Psychol & Neurosci, St Andrews, Scotland
[3] Max Planck Inst Empir Aesthet, Dept Cognit Neuropsychol, Frankfurt, Germany
Keywords
Bayesian inference; cognitive modeling; meta-learning; neural networks; rational analysis;
DOI
10.1017/S0140525X24000074
Chinese Library Classification
B84 [Psychology];
Discipline Code
04; 0402;
Abstract
Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. Whereas the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates committing to a particular prior and likelihood function that, in combination with Bayes' rule, determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, that is, by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to date. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.
Pages: 58
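The abstract's central technical claim is that meta-training a learner on tasks sampled from a prior yields an amortized algorithm that approximates the Bayes-optimal learner for that prior. The sketch below is an illustration of that idea only, not code from the article: it assumes PyTorch, a toy Beta-Bernoulli coin-flip environment, and a small GRU meta-trained to predict the next flip from the history of flips. Because the exact Bayes-optimal prediction has the closed form (heads + 1) / (t + 2) under a Beta(1, 1) prior, the meta-learned predictions can be compared against it directly.

# Minimal sketch (illustrative, not the authors' code): meta-learning an
# approximately Bayes-optimal predictor for a Beta-Bernoulli task family.
import torch
import torch.nn as nn

torch.manual_seed(0)
SEQ_LEN, BATCH, EPOCHS = 20, 256, 2000

class MetaLearner(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, flips):                 # flips: (batch, t, 1)
        h, _ = self.rnn(flips)
        return torch.sigmoid(self.head(h))    # P(next flip = 1) after each prefix

def sample_tasks(batch, seq_len):
    # Each task is a coin with bias theta ~ Beta(1, 1) = Uniform(0, 1).
    theta = torch.rand(batch, 1)
    flips = (torch.rand(batch, seq_len) < theta).float()
    return flips.unsqueeze(-1)

model = MetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for _ in range(EPOCHS):
    flips = sample_tasks(BATCH, SEQ_LEN)
    preds = model(flips[:, :-1])              # predict flip t+1 from flips 1..t
    loss = loss_fn(preds, flips[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare the meta-learned predictions with the exact Bayes-optimal ones.
with torch.no_grad():
    flips = sample_tasks(1000, SEQ_LEN)
    preds = model(flips[:, :-1]).squeeze(-1)
    heads = flips[:, :-1, 0].cumsum(dim=1)
    t = torch.arange(1, SEQ_LEN, dtype=torch.float32)
    bayes = (heads + 1.0) / (t + 2.0)          # Beta-Bernoulli posterior mean
    gap = (preds - bayes).abs().mean().item()
    print(f"mean |meta-learned - Bayes-optimal| = {gap:.4f}")

Because the training tasks are drawn from the prior and the loss is the log score on the next observation, the loss-minimizing predictor is the posterior predictive; the network never sees theta, yet its predictions converge toward the Bayes-optimal ones, which is the sense in which the meta-learned model "acquires its inductive biases from experience."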