NALO-VOM: Navigation-Oriented LiDAR-Guided Monocular Visual Odometry and Mapping for Unmanned Ground Vehicles
Cited by: 4
Authors:
Hu, Ziqi [1,2,3]
Yuan, Jing [1,2,3]
Gao, Yuanxi [1,2,3]
Wang, Boran [1,2,3]
Zhang, Xuebo [1,2,3]
Affiliations:
[1] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[2] Nankai Univ, Tianjin Key Lab Intelligent Robot, Tianjin 300350, Peoples R China
[3] Nankai Univ, Engn Res Ctr Trusted Behav Intelligence, Minist Educ, Tianjin 300350, Peoples R China
Source:
IEEE Transactions on Intelligent Vehicles
Keywords:
Navigation;
Laser radar;
Visualization;
Cameras;
Simultaneous localization and mapping;
Location awareness;
Three-dimensional displays;
Navigation-oriented visual odometry;
semi-dense map building;
unmanned ground vehicles;
SCALE RECOVERY;
REAL-TIME;
DOI:
10.1109/TIV.2023.3303355
CLC number:
TP18 [Artificial Intelligence Theory];
Discipline classification codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
Monocular visual odometry (VO) is a fundamental technique for unmanned ground vehicle (UGV) navigation. However, traditional monocular VO methods typically produce sparse environment maps that cannot be used directly for navigation because they lack structural information. In this article, we propose navigation-oriented LiDAR-guided monocular visual odometry and mapping (NALO-VOM) to obtain scale-consistent camera poses and a semi-dense environment map better suited to UGV navigation. The structure representation ability of the 3D LiDAR point cloud is learned by a major-plane prediction network and then transferred into the monocular VO system of NALO-VOM. As a result, NALO-VOM can construct a denser, higher-quality map using only a monocular camera. Specifically, the major-plane prediction network is trained offline with 3D LiDAR geometric information and predicts a major-plane mask (MP-Mask) for each visual frame during localization. The MP-Mask is then used for scale optimization and semi-dense map building. Experiments are performed on a public dataset and on self-collected sequences. The results show competitive localization accuracy and mapping quality compared with other visual odometry methods.
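The record does not specify how the MP-Mask enters scale optimization; the sketch below shows one plausible reading, assuming the mask marks ground-plane pixels and the camera mounting height above the ground is known. All names and parameters here (recover_scale, depths_up_to_scale, camera_height, K) are illustrative assumptions, not the authors' implementation.

import numpy as np

def recover_scale(depths_up_to_scale, pixels, mp_mask, K, camera_height=1.65):
    """Estimate a metric scale factor from pixels predicted to lie on the
    major (ground) plane, given the known camera mounting height in meters."""
    # Keep only the sparse VO points whose pixels fall inside the MP-Mask.
    u, v = pixels[:, 0], pixels[:, 1]
    keep = mp_mask[v, u]
    z = depths_up_to_scale[keep]

    # Back-project the masked pixels to up-to-scale 3D points (camera frame).
    x = (u[keep] - K[0, 2]) * z / K[0, 0]
    y = (v[keep] - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)

    # Fit a plane through the points via SVD; the last right-singular vector
    # is the unit normal of the least-squares plane through the centroid.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    dist = abs(normal @ centroid)  # up-to-scale camera-to-plane distance

    # The metric scale maps the estimated plane distance to the known height.
    return camera_height / dist

Under this reading, the returned factor would rescale the up-to-scale VO translation before the masked pixels are back-projected into the semi-dense map; a camera height of about 1.65 m is the usual assumption on KITTI-like vehicle setups.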
Pages: 2612-2623
Page count: 12