Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey

Cited by: 56
Authors
Tang, Yang [1]
Zhao, Chaoqiang [1]
Wang, Jianrui [1]
Zhang, Chongzhen [2]
Sun, Qiyu [1]
Zheng, Wei Xing [3]
Du, Wenli [1]
Qian, Feng [1]
Kurths, Juergen [4,5]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Smart Mfg Energy Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] Shanghai AI Lab, Shanghai 200030, Peoples R China
[3] Western Sydney Univ, Sch Comp Data & Math Sci, Sydney, NSW 2751, Australia
[4] Potsdam Inst Climate Impact Res, D-14473 Potsdam, Germany
[5] Humboldt Univ, Inst Phys, D-12489 Berlin, Germany
Funding
National Natural Science Foundation of China
Keywords
Autonomous systems; Navigation; Learning systems; Deep learning; Visualization; Simultaneous localization and mapping; Sensors; Autonomous system; deep learning; environment perception; learning systems; navigation; reinforcement learning; VISUAL ODOMETRY; SIMULTANEOUS LOCALIZATION; DEPTH; VISION; SLAM; VERSATILE; ROBUST; STEREO; TECHNOLOGIES; CONSISTENT;
DOI
10.1109/TNNLS.2022.3167688
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Autonomous systems are able to infer their own state, understand their surroundings, and perform autonomous navigation. With the application of learning systems such as deep learning and reinforcement learning, vision-based self-state estimation, environment perception, and navigation in autonomous systems have been addressed more effectively, and many new learning-based algorithms have emerged for autonomous visual perception and navigation. In this review, unlike previous reviews that discussed traditional methods, we focus on the applications of learning-based monocular approaches to ego-motion perception, environment perception, and navigation in autonomous systems. First, we delineate the shortcomings of existing classical visual simultaneous localization and mapping (vSLAM) solutions, which motivate the integration of deep learning techniques. Second, we review vision-based environment perception and understanding methods built on deep learning, including monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional vSLAM frameworks. Then, we focus on visual navigation based on learning systems, mainly reinforcement learning and deep reinforcement learning. Finally, we discuss several challenges and promising directions identified in related research on learning systems for computer science and robotics.
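To illustrate the kind of learning-based monocular perception the survey covers, below is a minimal sketch (not the authors' method) of a convolutional encoder-decoder that maps a single RGB image to a dense disparity map. The class name TinyMonoDepth, the layer sizes, and the PyTorch setup are all illustrative assumptions introduced here.

# A minimal, illustrative sketch of a learning-based monocular depth estimator:
# a small convolutional encoder-decoder that predicts per-pixel disparity from
# one RGB frame. All layer sizes are arbitrary choices for the example.
import torch
import torch.nn as nn

class TinyMonoDepth(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample the image and widen the channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and predict one channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Sigmoid keeps the predicted disparity in (0, 1); converting it to depth
        # via a fixed affine scaling is a common convention in such pipelines.
        return torch.sigmoid(self.decoder(self.encoder(x)))

if __name__ == "__main__":
    net = TinyMonoDepth()
    image = torch.randn(1, 3, 128, 416)   # a single monocular RGB frame
    disparity = net(image)                # dense per-pixel disparity map
    print(disparity.shape)                # torch.Size([1, 1, 128, 416])

In the self-supervised pipelines surveyed in the paper, a network of this kind is typically trained jointly with an ego-motion network using a photometric reconstruction loss between adjacent video frames, rather than with ground-truth depth labels.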
Pages: 9604-9624
Number of pages: 21