Vision-Based Road-Following Using Proportional Navigation

Cited by: 12
Authors:
Holt, Ryan S. [1 ]
Beard, Randal W. [2 ]
Institutions:
[1] MIT, Lincoln Lab, Lexington, MA 02420 USA
[2] Brigham Young Univ, Dept Elect & Comp Engn, Provo, UT 84602 USA
Funding:
US National Science Foundation; NASA
Keywords:
Unmanned air vehicles; Vision based guidance; Proportional navigation; Road following;
DOI:
10.1007/s10846-009-9353-7
CLC classification:
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
This paper describes a new approach for autonomous road following for an unmanned air vehicle (UAV) using a visual sensor. A road is defined as any continuous, extended, curvilinear feature, which can include city streets, highways, and dirt roads, as well as forest-fire perimeters, shorelines, and fenced borders. To achieve autonomous road-following, this paper utilizes Proportional Navigation as the basis for the guidance law, where visual information is directly fed back into the controller. The tracking target for the Proportional Navigation algorithm is chosen as the position on the edge of the camera frame at which the road flows into the image. Therefore, each frame in the video stream only needs to be searched on the edge of the frame, thereby significantly reducing the computational requirements of the computer vision algorithms. The tracking error defined in the camera reference frame shows that the Proportional Navigation guidance law results in a steady-state error caused by bends and turns in the road, which are perceived as road motion. The guidance algorithm is therefore adjusted using Augmented Proportional Navigation Guidance to account for the perceived road accelerations and to force the steady-state error to zero. The effectiveness of the solution is demonstrated through high-fidelity simulations, and with flight tests using a small autonomous UAV.
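The guidance laws named in the abstract can be sketched compactly. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a single 2-D lateral channel, a finite-difference line-of-sight (LOS) rate estimate between video frames, and a hypothetical navigation gain `nav_gain`; all function and variable names are invented for illustration.

```python
import math

def los_angle(uav_pos, road_point):
    """LOS angle from the UAV to the tracked point where the road
    flows into the edge of the camera frame (the PN tracking target)."""
    return math.atan2(road_point[1] - uav_pos[1],
                      road_point[0] - uav_pos[0])

def los_rate(prev_angle, curr_angle, dt):
    """Finite-difference estimate of the LOS rate between two frames."""
    return (curr_angle - prev_angle) / dt

def pn_command(lam_dot, closing_speed, nav_gain=3.0):
    """Classic Proportional Navigation: lateral acceleration
    a = N * V_c * lambda_dot."""
    return nav_gain * closing_speed * lam_dot

def apn_command(lam_dot, closing_speed, road_accel, nav_gain=3.0):
    """Augmented PN: adds (N/2) * a_T to compensate the apparent road
    acceleration that bends and turns induce, which is what drives the
    PN steady-state tracking error to zero in the paper's formulation."""
    return pn_command(lam_dot, closing_speed, nav_gain) \
        + 0.5 * nav_gain * road_accel
```

When the perceived road acceleration is zero (a straight road segment), the augmented command reduces to the plain PN command, matching the abstract's claim that the correction only matters at bends and turns.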
Pages: 193 - 216 (24 pages)
Related Papers
50 records
  • [41] Vision-based Navigation in Indoor Environments without using Image Database
    Lee, Hyunho
    Kim, Jaehun
    Kim, Chulki
    Seo, Minah
    Lee, Seok
    Hur, Soojung
    Lee, Taikjin
    PROCEEDINGS OF THE 27TH INTERNATIONAL TECHNICAL MEETING OF THE SATELLITE DIVISION OF THE INSTITUTE OF NAVIGATION (ION GNSS 2014), 2014, : 235 - 242
  • [42] Vision-Based Navigation of Omnidirectional Mobile Robots
    Ferro, Marco
    Paolillo, Antonio
    Cherubini, Andrea
    Vendittelli, Marilena
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (03) : 2691 - 2698
  • [43] Detecting and diagnosing mistakes in vision-based navigation
    Stuck, ER
    ROBOTICS AND AUTONOMOUS SYSTEMS, 1996, 17 (04) : 259 - 285
  • [44] Vision-Based Navigation through Urban Canyons
    Hrabar, Stefan
    Sukhatme, Gaurav
    JOURNAL OF FIELD ROBOTICS, 2009, 26 (05) : 431 - 452
  • [45] Robust signboard recognition for vision-based navigation
    Graduate School of Engineering, Osaka City University, Sugimoto 3-3-138, Sumiyoshi-ku, Osaka 558-8585, Japan
    KYOKAI JOHO IMEJI ZASSHI, 2007, (8) : 1192 - 1200
  • [46] Research on RFID and Vision-based AGV Navigation
    Man, Z. G.
    Ye, W. H.
    Zhao, P.
    Lou, P. H.
    Wu, T. J.
    FRONTIER IN FUNCTIONAL MANUFACTURING TECHNOLOGIES, 2010, 136 : 298 - 302
  • [47] Vision-based maze navigation for humanoid robots
    Paolillo, Antonio
    Faragasso, Angela
    Oriolo, Giuseppe
    Vendittelli, Marilena
    AUTONOMOUS ROBOTS, 2017, 41 (02) : 293 - 309
  • [48] Vision-based Mobile Robot Navigation Using Active Learning Concept
    Ju, Ming-Yi
    Lee, Ji-Rong
    2013 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS (ARIS), 2013, : 122 - 129
  • [49] Vision-based vehicle navigation using the fluorescent lamp array on the ceiling
    Hashimoto, Takeshi
    Yamamoto, Shigehiro
    Aso, Takehiko
    Abe, Minoru
    Memoirs of the Faculty of Engineering, Kyoto University, 1993, 55 (pt 1): : 37 - 48
  • [50] Vision-based navigation from wheels to wings
    Zufferey, JC
    Beyeler, A
    Floreano, D
    IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2003, : 2968 - 2973