A Task-Driven Scene-Aware LiDAR Point Cloud Coding Framework for Autonomous Vehicles

Cited by: 14
Authors
Sun, Xuebin [1 ,2 ]
Wang, Miaohui [1 ,2 ]
Du, Jingxin [3 ]
Sun, Yuxiang [4 ]
Cheng, Shing Shin [3 ]
Xie, Wuyuan [5 ]
Affiliations
[1] Shenzhen Univ, Guangdong Key Lab Intelligent Informat Proc, Shenzhen 518060, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518172, Peoples R China
[3] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Shatin, Hong Kong, Peoples R China
[4] Hong Kong Polytech Univ, Dept Mech Engn, Hong Kong, Peoples R China
[5] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Point cloud compression; Laser radar; Encoding; Task analysis; Three-dimensional displays; Feature extraction; Autonomous vehicles; LiDAR point clouds; semantic segmentation; VISION; FUSION;
DOI
10.1109/TII.2022.3221222
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
LiDAR sensors are almost indispensable for autonomous robots to perceive the surrounding environment. However, the transmission of large-scale LiDAR point clouds is highly bandwidth-intensive, which can easily lead to transmission problems, especially over unstable communication networks. Meanwhile, existing LiDAR data compression is mainly based on rate-distortion optimization, which ignores the semantic information of ordered point clouds and the task requirements of autonomous robots. To address these challenges, this article presents a task-driven Scene-Aware LiDAR Point Cloud Coding (SA-LPCC) framework for autonomous vehicles. Specifically, a semantic segmentation model is developed based on multidimensional information, in which both 2-D texture and 3-D topology information are fully utilized to segment movable objects. Furthermore, a prediction-based deep network is explored to remove spatial-temporal redundancy. Experimental results on the SemanticKITTI benchmark validate that SA-LPCC achieves state-of-the-art performance in terms of reconstruction quality and storage space for downstream tasks. We believe that SA-LPCC, by jointly considering the scene-aware characteristics of movable objects and removing spatial-temporal redundancy through an end-to-end learning mechanism, will boost related applications from algorithm optimization to industrial products.
Pages: 8731-8742
Page count: 12
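The abstract describes a scene-aware front-end that segments movable objects before coding, so that task-relevant geometry can be preserved at higher fidelity than the static background. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: the class names, the simple grid quantization stand-in for a real codec, and the fidelity settings are all assumptions for illustration.

```python
# Hypothetical sketch of semantic-aware point cloud coding:
# split a labeled LiDAR frame into movable and static points,
# then code each stream at a different geometric fidelity.

MOVABLE_CLASSES = {"car", "pedestrian", "cyclist"}  # assumed label set

def split_by_semantics(points, labels):
    """Partition points into (movable, static) using per-point labels."""
    movable, static = [], []
    for p, lbl in zip(points, labels):
        (movable if lbl in MOVABLE_CLASSES else static).append(p)
    return movable, static

def quantize(points, step):
    """Coarse grid quantization, standing in for a real geometry codec."""
    return [tuple(round(c / step) * step for c in p) for p in points]

# Movable objects keep fine geometry for downstream detection tasks;
# the static background tolerates a much coarser quantization step.
frame = [(1.02, 2.33, 0.11), (10.5, -3.2, 0.4), (10.6, -3.1, 0.5)]
labels = ["car", "road", "road"]
movable, static = split_by_semantics(frame, labels)
coded = quantize(movable, 0.01) + quantize(static, 0.5)
```

In the actual SA-LPCC framework, the segmentation is learned from 2-D texture and 3-D topology cues, and the redundancy removal is a prediction-based deep network rather than per-frame quantization.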