INF: Implicit Neural Fusion for LiDAR and Camera

Cited by: 3
Authors
Zhou, Shuyi [1 ,2 ]
Xie, Shuxiang [1 ,2 ]
Ishikawa, Ryoichi [1 ]
Sakurada, Ken [2 ]
Onishi, Masaki [2 ]
Oishi, Takeshi [1 ]
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Tokyo, Japan
[2] Natl Inst Adv Ind Sci & Technol (AIST), Tokyo, Japan
Keywords
CALIBRATION
DOI
10.1109/IROS55552.2023.10341648
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Sensor fusion has become a popular topic in robotics. However, conventional fusion methods encounter many difficulties, such as data representation differences, sensor variations, and extrinsic calibration. For example, the calibration methods used for LiDAR-camera fusion often require manual operation and auxiliary calibration targets. Implicit neural representations (INRs) have been developed for 3D scenes, and the volume density distribution involved in an INR unifies the scene information obtained by different types of sensors. Therefore, we propose implicit neural fusion (INF) for LiDAR and camera. INF first trains a neural density field of the target scene using LiDAR frames. Then, a separate neural color field is trained using camera images and the trained neural density field. Along with the training process, INF both estimates LiDAR poses and optimizes extrinsic parameters. Our experiments demonstrate the high accuracy and stable performance of the proposed method.
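The abstract describes a two-stage pipeline: a neural density field is first fit to LiDAR frames, and a neural color field is then fit to camera images through that density field while the LiDAR poses and the LiDAR-camera extrinsic are refined. The following is a minimal illustrative sketch of such a scheme, not the authors' implementation: the field architectures, the synthetic stand-in rays, the losses, and the first-order (small-angle) pose update are assumptions made for brevity, and LiDAR pose refinement is omitted.

import torch
import torch.nn as nn

class Field(nn.Module):
    """Coordinate MLP mapping 3D points to `out_dim` channels (illustrative size)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, x):
        return self.net(x)

def render(density, origins, dirs, n_samples=64, far=20.0):
    """Volume rendering along rays: expected depth, per-sample weights, sample points."""
    t = torch.linspace(0.1, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]      # (R, S, 3)
    sigma = torch.relu(density(pts)).squeeze(-1)                         # (R, S)
    alpha = 1.0 - torch.exp(-sigma * (far / n_samples))
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    weights = alpha * trans
    depth = (weights * t).sum(-1)
    return depth, weights, pts

def skew(v):
    """Skew-symmetric matrix of a 3-vector, used for a first-order rotation update."""
    z = torch.zeros((), dtype=v.dtype)
    return torch.stack([torch.stack([z, -v[2], v[1]]),
                        torch.stack([v[2], z, -v[0]]),
                        torch.stack([-v[1], v[0], z])])

# Synthetic stand-in rays; real use would sample LiDAR beams and camera pixels.
lidar_o = torch.zeros(256, 3)
lidar_d = nn.functional.normalize(torch.randn(256, 3), dim=-1)
lidar_range = torch.full((256,), 5.0)
cam_o = torch.zeros(256, 3)
cam_d = nn.functional.normalize(torch.randn(256, 3), dim=-1)
cam_rgb = torch.rand(256, 3)

# Stage 1: fit the neural density field to LiDAR range returns via rendered depth.
density = Field(1)
opt_d = torch.optim.Adam(density.parameters(), lr=1e-3)
for _ in range(100):
    depth, _, _ = render(density, lidar_o, lidar_d)
    loss = ((depth - lidar_range) ** 2).mean()
    opt_d.zero_grad(); loss.backward(); opt_d.step()

# Stage 2: fit the neural color field from camera pixels with the density field
# frozen, while optimizing a 6-DoF extrinsic correction (rotation | translation).
for p in density.parameters():
    p.requires_grad_(False)
color = Field(3)
xi = torch.zeros(6, requires_grad=True)
opt_c = torch.optim.Adam(list(color.parameters()) + [xi], lr=1e-3)
for _ in range(100):
    R = torch.eye(3) + skew(xi[:3])                  # small-angle rotation update
    o = cam_o @ R.T + xi[3:]                         # camera rays mapped to the LiDAR/world frame
    d = nn.functional.normalize(cam_d @ R.T, dim=-1)
    _, weights, pts = render(density, o, d)
    rgb = (weights[..., None] * torch.sigmoid(color(pts))).sum(1)
    loss = ((rgb - cam_rgb) ** 2).mean()
    opt_c.zero_grad(); loss.backward(); opt_c.step()

In the sketch, the photometric loss reaches the extrinsic parameters only through the frozen density field, which mirrors the idea of calibrating without targets; the paper's actual field parameterization and pose handling follow the publication rather than this example.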
Pages: 10918-10925
Page count: 8
Related papers
50 records in total
  • [1] INF3: Implicit Neural Feature Fusion Function for Multispectral and Hyperspectral Image Fusion
    Wu, Ruo-Cheng
    Deng, Shangqi
    Ran, Ran
    Dou, Hong-Xia
    Deng, Liang-Jian
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2024, 10 : 1547 - 1558
  • [2] A survey of LiDAR and camera fusion enhancement
    Zhong, Huazan
    Wang, Hao
    Wu, Zhengrong
    Zhang, Chen
    Zheng, Yongwei
    Tang, Tao
    PROCEEDINGS OF THE 10TH INTERNATIONAL CONFERENCE OF INFORMATION AND COMMUNICATION TECHNOLOGY, 2021, 183 : 579 - 588
  • [3] Camera and lidar fusion for pedestrian detection
    Jun, Wang
    Wu, Tao
    PROCEEDINGS 3RD IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION ACPR 2015, 2015, : 371 - 375
  • [4] LIDAR-camera fusion for road detection using fully convolutional neural networks
    Caltagirone, Luca
    Bellone, Mauro
    Svensson, Lennart
    Wahde, Mattias
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 111 : 125 - 131
  • [5] Neural Approach to Coordinate Transformation for LiDAR-Camera Data Fusion in Coastal Observation
    Garczynska-Cyprysiak, Ilona
    Kazimierski, Witold
    Wlodarczyk-Sielicka, Marta
    SENSORS, 2024, 24 (20)
  • [6] CLONeR: Camera-Lidar Fusion for Occupancy Grid-Aided Neural Representations
    Carlson, Alexandra
    Ramanagopal, Manikandasriram S.
    Tseng, Nathan
    Johnson-Roberson, Matthew
    Vasudevan, Ram
    Skinner, Katherine A.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2812 - 2819
  • [7] Fully convolutional neural networks for LIDAR–camera fusion for pedestrian detection in autonomous vehicle
    Daniel, J. Alfred
    Vignesh, C. Chandru
    Muthu, Bala Anand
    Kumar, R. Senthil
    Sivaparthipan, C. B.
    Montenegro Marin, Carlos Enrique
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 : 25107 - 25130
  • [8] Vehicle Detection Based on LiDAR and Camera Fusion
    Zhang, Feihu
    Clarke, Daniel
    Knoll, Alois
    2014 IEEE 17TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2014, : 1620 - 1625
  • [9] Advancements in fusion calibration technology of lidar and camera
    Wang S.
    Meng Z.
    Gao N.
    Zhang Z.
    Hongwai yu Jiguang Gongcheng/Infrared and Laser Engineering, 2023, 52 (08):
  • [10] Camera, LiDAR, and Radar Sensor Fusion Based on Bayesian Neural Network (CLR-BNN)
    Ravindran, Ratheesh
    Santora, Michael J.
    Jamali, Mohsin M.
    IEEE SENSORS JOURNAL, 2022, 22 (07) : 6964 - 6974