Teaching robots to build simulations of themselves

Cited by: 0
Authors
Hu, Yuhang [1 ]
Lin, Jiong [1 ]
Lipson, Hod [1 ,2 ]
Affiliations
[1] Columbia Univ, Mech Engn Dept, Creat Machines Lab, New York, NY 10027 USA
[2] Columbia Univ, Data Sci Inst, New York, NY 10027 USA
Funding
US National Science Foundation
DOI
10.1038/s42256-025-01006-w
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The emergence of vision catalysed a pivotal evolutionary advancement, enabling organisms not only to perceive but also to interact intelligently with their environment. This transformation is mirrored by the evolution of robotic systems, where the ability to leverage vision to simulate and predict their own dynamics marks a leap towards autonomy and self-awareness. Humans use vision to record experiences and internally simulate potential actions. For example, we can imagine that, if we stand up and raise our arms, our body will form a 'T' shape, without any physical movement. Similarly, simulation allows robots to plan and predict the outcomes of potential actions without execution. Here we introduce a self-supervised learning framework to enable robots to model and predict their morphology, kinematics and motor control using only brief raw video data, eliminating the need for extensive real-world data collection and kinematic priors. By observing their own movements, akin to humans watching their reflection in a mirror, robots learn the ability to simulate themselves and predict their spatial motion for various tasks. Our results demonstrate that this self-learned simulation not only enables accurate motion planning but also allows the robot to detect abnormalities and recover from damage.
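The abstract describes a pipeline in which raw video of the robot's own motion supervises a learned forward model from motor commands to predicted spatial configuration. As a rough illustration only (this record reproduces the abstract, not the method), the Python/PyTorch sketch below trains a hypothetical SelfModel to regress video-derived 3D keypoints from joint angles; the class name, the keypoint representation, the tensor shapes and the MSE objective are all illustrative assumptions, not the authors' architecture.

# Hypothetical sketch only: a self-supervised "self-model" in the spirit of
# the abstract. The targets stand in for keypoints extracted from the robot's
# own camera footage, so no human labels or kinematic priors are assumed.
# All names and shapes are invented for illustration.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Maps motor commands (joint angles) to predicted 3D body keypoints."""
    def __init__(self, n_joints: int, n_keypoints: int, hidden: int = 128):
        super().__init__()
        self.n_keypoints = n_keypoints
        self.net = nn.Sequential(
            nn.Linear(n_joints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_keypoints * 3),  # (x, y, z) per keypoint
        )

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        return self.net(joints).view(-1, self.n_keypoints, 3)

def train_step(model, optimizer, joints, video_keypoints):
    # Self-supervision: the "labels" come from the robot watching itself,
    # e.g. keypoints tracked in its own video (placeholder tensors here).
    pred = model(joints)
    loss = nn.functional.mse_loss(pred, video_keypoints)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SelfModel(n_joints=6, n_keypoints=8)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    joints = torch.randn(256, 6)        # stand-in motor commands
    keypoints = torch.randn(256, 8, 3)  # stand-in video-derived keypoints
    for step in range(5):
        loss = train_step(model, optimizer, joints, keypoints)
        print(f"step {step}: loss = {loss:.4f}")

Once trained, such a model could be queried with candidate commands to "imagine" the resulting pose (the abstract's 'T'-shape example) before any physical movement, which is the behaviour the paper reports using for motion planning and damage recovery.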
Pages: 484-494
Number of pages: 17