Autonomous learning of terrain classification within imagery for robot navigation

Cited by: 6
Authors:
Happold, Michael [1 ]
Ollis, Mark [1 ]
Institutions:
[1] Appl Percept Inc, Pittsburgh, PA 16066 USA
DOI: 10.1109/ICSMC.2006.384392
CLC Number: TP [Automation Technology, Computer Technology]
Subject Classification: 0812
Abstract:
Stereo matching in unstructured, outdoor environments is often confounded by the complexity of the scenery and thus may yield only sparse disparity maps. Two-dimensional visual imagery, on the other hand, offers dense information about the environment of mobile robots, but is often difficult to exploit. Training a supervised classifier to identify traversable regions within images that generalizes well across a large variety of environments requires a vast corpus of labeled examples. Autonomous learning of the traversable/untraversable distinction indicated by scene appearance is therefore a highly desirable goal of robot vision. We describe here a system for learning this distinction online without the involvement of a human supervisor. The system takes in imagery and range data from a pair of stereo cameras mounted on a small mobile robot and autonomously learns to produce a labeling of scenery. Supervision of the learning process comes entirely from information gathered from range data. Two types of boosted weak learners, Nearest Means and naive Bayes, are trained on this autonomously labeled corpus. The resulting classified images provide dense information about the environment, which can be used to fill in regions where stereo cannot find matches, or in lieu of stereo to direct robot navigation. This method has been tested across a large array of environment types and can produce very accurate labelings of scene imagery, as judged by human experts and as compared against purely geometry-based labelings. Because it is online and rapid, it eliminates some of the problems related to color constancy and dynamic environments.
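The learning loop the abstract describes — labels derived from range geometry supervising a boosted appearance classifier — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the synthetic patch features, the class means, and the `fit_nearest_means`/`adaboost` helpers are all assumptions, with stereo-derived traversability labels replaced by synthetic ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the autonomously labeled corpus:
# each row is a mean-RGB feature of an image patch; the label
# (+1 traversable, -1 obstacle) would in practice come from
# stereo range geometry rather than be generated here.
n = 400
traversable = rng.normal([0.20, 0.60, 0.20], 0.1, (n, 3))  # greenish ground
obstacle    = rng.normal([0.55, 0.30, 0.25], 0.1, (n, 3))  # brownish obstacles
X = np.vstack([traversable, obstacle])
y = np.hstack([np.ones(n), -np.ones(n)])

def fit_nearest_means(X, y, w):
    """Nearest-Means weak learner: weighted class means, nearest-mean rule."""
    mu_pos = np.average(X[y == 1], axis=0, weights=w[y == 1])
    mu_neg = np.average(X[y == -1], axis=0, weights=w[y == -1])
    def predict(Xq):
        d_pos = np.linalg.norm(Xq - mu_pos, axis=1)
        d_neg = np.linalg.norm(Xq - mu_neg, axis=1)
        return np.where(d_pos < d_neg, 1.0, -1.0)
    return predict

def adaboost(X, y, rounds=10):
    """Standard AdaBoost over the weak learner above."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        h = fit_nearest_means(X, y, w)
        pred = h(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weak-learner weight
        w *= np.exp(-alpha * y * pred)            # reweight hard examples
        w /= w.sum()
        ensemble.append((alpha, h))
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in ensemble))

clf = adaboost(X, y)
acc = (clf(X) == y).mean()  # training accuracy on the labeled corpus
```

In the paper's setting the labels come from geometry, so the classifier can be retrained continuously as the robot moves, which is what makes the online, rapid adaptation to lighting and scene changes possible.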
Pages: 260+ (2 pages)
Related Papers (50 total)
  • [31] Autonomous Mobile Robot Navigation using Machine Learning
    Song, Xiyang
    Fang, Huangwei
    Jiao, Xiong
    Wang, Ying
    2012 IEEE 6TH INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION FOR SUSTAINABILITY (ICIAFS2012), 2012, : 135 - 140
  • [32] Intelligent sensor fusion and learning for autonomous robot navigation
    Tan, KC
    Chen, YJ
    Wang, LF
    Liu, DK
    APPLIED ARTIFICIAL INTELLIGENCE, 2005, 19 (05) : 433 - 456
  • [33] Evolutionary approach to navigation learning in autonomous mobile robot
    Wang, Fei
    Kamano, Takuya
    Yasuno, Takashi
    Harada, Hironobu
    PROCEEDINGS OF 2006 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE: 50 YEARS' ACHIEVEMENTS, FUTURE DIRECTIONS AND SOCIAL IMPACTS, 2006, : 268 - 270
  • [34] Structured Kernel Subspace Learning for Autonomous Robot Navigation
    Kim, Eunwoo
    Choi, Sungjoon
    Oh, Songhwai
    SENSORS, 2018, 18 (02):
  • [35] Representing and Selecting Landmarks in Autonomous Learning of Robot Navigation
    Frommberger, Lutz
    INTELLIGENT ROBOTICS AND APPLICATIONS, PT I, PROCEEDINGS, 2008, 5314 : 488 - 497
  • [36] Terrain perception for robot navigation
    Karlsen, Robert E.
    Witus, Gary
    UNMANNED SYSTEMS TECHNOLOGY IX, 2007, 6561
  • [37] ROBOT NAVIGATION IN AN UNEXPLORED TERRAIN
    RAO, NSV
    IYENGAR, SS
    JORGENSEN, CC
    WEISBIN, CR
    JOURNAL OF ROBOTIC SYSTEMS, 1986, 3 (04): : 389 - 407
  • [38] Terrain understanding for robot navigation
    Karlsen, Robert E.
    Witus, Gary
    2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-9, 2007, : 901 - +
  • [39] AUTONOMOUS ROBOT NAVIGATION
    JORGENSEN, C
    HAMEL, W
    WEISBIN, C
    BYTE, 1986, 11 (01): : 223 - &
  • [40] A DEEP LEARNING APPROACH FOR OPTICAL AUTONOMOUS PLANETARY RELATIVE TERRAIN NAVIGATION
    Campbell, Tanner
    Furfaro, Roberto
    Linares, Richard
    Gaylor, David
    SPACEFLIGHT MECHANICS 2017, PTS I - IV, 2017, 160 : 3293 - 3302