Road Pothole Recognition and Size Measurement Based on the Fusion of Camera and LiDAR

Cited: 0
Authors
Cai, Yongxiang [1 ]
Deng, Mingxing [2 ,3 ]
Xu, Xin [2 ]
Wang, Wei [1 ]
Xu, Xiaowei [2 ,3 ]
Affiliations
[1] CATARC Tianjin Automot Engn Res Inst Co Ltd, Tianjin 300300, Peoples R China
[2] Wuhan Univ Sci & Technol, Sch Automobile & Traff Engn, Wuhan 430065, Peoples R China
[3] Hubei Prov Engn Res Ctr Adv Chassis Technol New En, Wuhan 430065, Peoples R China
Source
IEEE ACCESS, 2025, Vol. 13
Funding
National Natural Science Foundation of China;
Keywords
Laser radar; Point cloud compression; Roads; Feature extraction; Accuracy; Three-dimensional displays; Image segmentation; Data mining; Size measurement; Cameras; Point cloud clustering; road pothole recognition; road pothole size measurement; roughness characteristics; the fusion of camera and LiDAR;
DOI
10.1109/ACCESS.2025.3549835
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
In intelligent driving systems, accurate detection and measurement of the three-dimensional (3D) dimensions of road potholes are essential for decision-making processes such as deceleration and obstacle avoidance. To address the unreliable depth information in images and the impracticality of processing large LiDAR point clouds in real time, both of which can cause false positives or missed detections, we propose a fusion-based approach that integrates images and LiDAR point clouds to measure pothole dimensions. We first extract ground LiDAR points from the raw data through statistical filtering and ground segmentation. We then lift a two-dimensional (2D) image region of interest into a frustum to identify candidate pothole areas within the ground points. Next, roughness feature description and the Mean-shift clustering algorithm extract precise pothole point sets, from which we determine the depth, length, width, and coordinates of each pothole relative to the vehicle. Finally, experiments on the open-source KITTI Road dataset and real-vehicle data show that our method accurately delineates pothole contours, improving accuracy by 27.4% over LiDAR-only methods and reducing average processing time by 88.2%. In practical scenarios, the relative error in size measurement is generally within 15%, with an average data processing time of 45.6 ms per frame, well within the system's real-time budget of 100 ms per frame.
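The pipeline the abstract describes (statistical outlier filtering of the raw cloud, Mean-shift clustering of candidate pothole points, then bounding-box size measurement) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function names (`statistical_filter`, `mean_shift`, `pothole_size`) and the parameters `k`, `std_ratio`, and `bandwidth` are illustrative assumptions, and the brute-force neighbor search is only suitable for small clouds.

```python
import numpy as np

def statistical_filter(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio * std.

    Brute-force O(n^2) distances; fine for a demo, not for full LiDAR sweeps.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Column 0 of the sorted row is the self-distance (0), so skip it.
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

def mean_shift(points, bandwidth=0.3, iters=30):
    """Flat-kernel mean shift; returns an integer cluster label per point."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            near = points[np.linalg.norm(points - m, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    # Merge converged modes closer than half the bandwidth into one cluster.
    labels = np.full(len(points), -1, dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels

def pothole_size(points):
    """Axis-aligned length/width and depth of one pothole point set.

    Assumes the segmented ground plane sits at z = 0, so depth is the
    largest drop below it.
    """
    length = np.ptp(points[:, 0])
    width = np.ptp(points[:, 1])
    depth = -points[:, 2].min()
    return length, width, depth

if __name__ == "__main__":
    # Synthetic pothole: a 1.0 m x 0.5 m grid of points 0.1 m below ground,
    # plus one spurious return far away.
    pothole = np.array([[x, y, -0.1]
                        for x in np.linspace(0.0, 1.0, 5)
                        for y in np.linspace(0.0, 0.5, 4)])
    noisy = np.vstack([pothole, [[10.0, 10.0, 5.0]]])
    clean = statistical_filter(noisy, k=4)
    print("kept points:", len(clean))
    print("length, width, depth:", pothole_size(clean))
```

In the real system, filtering would run on the full sweep before ground segmentation, and Mean-shift would separate multiple potholes inside the frustum; each resulting label set would then be passed to the size measurement step.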
Pages: 46210-46227
Page count: 18
Related Papers (50 total)
  • [41] Camera-LiDAR Fusion With Latent Correlation for Cross-Scene Place Recognition
    Pan, Yan
    Xie, Jiapeng
    Wu, Jiajie
    Zhou, Bo
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2025, 72 (03) : 2801 - 2809
  • [42] LiDAR-camera fusion for road detection using a recurrent conditional random field model
    Wang, Lele
    Huang, Yingping
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [43] Lidar-based road terrain recognition for passenger vehicles
    Wang, Shifeng
    Kodagoda, Sarath
    Shi, Lei
    Xu, Ning
    INTERNATIONAL JOURNAL OF VEHICLE DESIGN, 2017, 74 (02) : 153 - 165
  • [44] Nontarget-based displacement measurement using LiDAR and camera
    Lee, Sahyeon
    Kim, Hyunjun
    Sim, Sung-Han
    AUTOMATION IN CONSTRUCTION, 2022, 142
  • [45] Target Fusion Detection of LiDAR and Camera Based on the Improved YOLO Algorithm
    Han, Jian
    Liao, Yaping
    Zhang, Junyou
    Wang, Shufeng
    Li, Sixian
    MATHEMATICS, 2018, 6 (10)
  • [46] 3D Vehicle Detection Based on LiDAR and Camera Fusion
    Cai, Yingfeng
    Zhang, Tiantian
    Wang, Hai
    Li, Yicheng
    Liu, Qingchao
    Chen, Xiaobo
    AUTOMOTIVE INNOVATION, 2019, 2 (04) : 276 - 283
  • [47] Gmapping Mapping Based on Lidar and RGB-D Camera Fusion
    Li, Quanfeng
    Wu, Haibo
    Chen, Jiang
    Zhang, Yixiao
    LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (12)
  • [49] Localization and mapping algorithm based on Lidar-IMU-Camera fusion
    Zhao, Yibing
    Liang, Yuhe
    Ma, Zhenqiang
    Guo, Lie
    Zhang, Hexin
    JOURNAL OF INTELLIGENT AND CONNECTED VEHICLES, 2024, 7 (02) : 97 - 107
  • [50] INF: Implicit Neural Fusion for LiDAR and Camera
    Zhou, Shuyi
    Xie, Shuxiang
    Ishikawa, Ryoichi
    Sakurada, Ken
    Onishi, Masaki
    Oishi, Takeshi
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 10918 - 10925