Incremental learning for robots: A survey

Cited by: 0
Authors
Ma X.-M. [1 ]
Xu D. [1 ,2 ]
Affiliations
[1] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
[2] CAS Engineering Laboratory for Intelligent Industrial Vision, Institute of Automation, Chinese Academy of Sciences, Beijing
Source
Kongzhi yu Juece/Control and Decision | 2024, Vol. 39, No. 5
Keywords
hybrid method; incremental learning; robot; skill learning; variable model method; variable parameter method
DOI
10.13195/j.kzyjc.2023.0631
Abstract
Nowadays, the application scenarios of robots are constantly expanding and the amount of available data keeps growing, so traditional machine learning methods struggle to adapt to dynamic environments. Incremental learning simulates the human learning process, enabling robots to exploit old knowledge to speed up the learning of new tasks and to acquire new skills without forgetting old ones. Research on incremental learning for robots is still relatively scarce, and this paper surveys its research progress. Firstly, a brief introduction to incremental learning is given. Secondly, from the perspective of parameters and models, the current mainstream methods of robot incremental learning are classified into three categories, namely variable parameter methods, variable model methods and hybrid methods, each of which is discussed in detail, and corresponding application examples in robotics are provided. Thirdly, the data sets and evaluation metrics commonly used in incremental learning are introduced. Finally, future development trends are discussed. © 2024 Northeast University. All rights reserved.
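The survey itself gives the technical details of the three method categories; purely as an illustrative sketch (not taken from the paper), the Python snippet below shows one common way a variable-parameter-style method resists forgetting: a quadratic penalty, in the spirit of elastic weight consolidation, that keeps parameters important for previously learned skills close to their old values while a new task is trained. All function and variable names here (ewc_penalty, train_new_task, old_params, fisher) are hypothetical.

# Illustrative sketch only, assuming PyTorch: an EWC-style quadratic penalty
# that discourages parameters important for old tasks from drifting while a
# new task is learned. Not the method of the surveyed paper.
import torch


def ewc_penalty(model, old_params, fisher, lam=1.0):
    # Penalty sum_i F_i * (theta_i - theta_i_old)^2, weighted by lam.
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss


def train_new_task(model, loader, old_params, fisher, epochs=1, lr=1e-3, lam=10.0):
    # Standard task loss plus the consolidation penalty on old-task parameters.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = ce(model(x), y) + ewc_penalty(model, old_params, fisher, lam)
            loss.backward()
            opt.step()
    return model

Here old_params is a dict of parameter snapshots taken after the previous task and fisher is a dict of per-parameter importance estimates (for example, diagonal Fisher information); both are assumed to have been computed beforehand.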
Pages: 1409-1423
Page count: 14