MF-MOS: A Motion-Focused Model for Moving Object Segmentation

Cited by: 4
Authors
Cheng, Jintao [1 ]
Zeng, Kang [1 ]
Huang, Zhuoxu [2 ]
Tang, Xiaoyu [1 ]
Wu, Jin [3 ]
Zhang, Chengxi [4 ]
Chen, Xieyuanli [5 ]
Fan, Rui [6 ,7 ,8 ]
Affiliations
[1] South China Normal Univ, Sch Elect & Informat Engn, Foshan 528225, Peoples R China
[2] Aberystwyth Univ, Dept Comp Sci, Aberystwyth SY23 3DB, Dyfed, Wales
[3] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[4] Jiangnan Univ, Sch Internet Things Engn, Wuxi, Jiangsu, Peoples R China
[5] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha, Peoples R China
[6] Tongji Univ, Coll Elect & Informat Engn, Shanghai Res Inst Intelligent Autonomous Syst, Shanghai 201804, Peoples R China
[7] Tongji Univ, State Key Lab Intelligent Autonomous Syst, Shanghai 201804, Peoples R China
[8] Tongji Univ, Frontiers Sci Ctr Intelligent Autonomous Syst, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICRA57147.2024.10611400
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
Moving object segmentation (MOS) provides a reliable solution for detecting traffic participants and is therefore of great interest in the autonomous driving field. Dynamic capture is always critical in the MOS problem. Previous methods capture motion features from the range images directly. In contrast, we argue that the residual maps offer greater potential for motion information, while range images contain rich semantic guidance. Based on this intuition, we propose MF-MOS, a novel motion-focused model with a dual-branch structure for LiDAR moving object segmentation. We decouple the spatial-temporal information by capturing motion from the residual maps and generating semantic features from the range images, which serve as movable-object guidance for the motion branch. Our straightforward yet distinctive solution makes the most of both range images and residual maps, thus greatly improving the performance of the LiDAR-based MOS task. Remarkably, our MF-MOS achieved a leading IoU of 76.7% on the MOS leaderboard of the SemanticKITTI dataset upon submission, demonstrating state-of-the-art performance. The implementation of our MF-MOS has been released at https://github.com/SCNU-RISLAB/MF-MOS.
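The dual-branch idea described above can be illustrated with a short, hypothetical PyTorch sketch. This is not the authors' released implementation (see the repository linked in the abstract); the class name DualBranchMOS, the helper conv_block, the 5-channel range-image input, the 8 residual frames, and all channel widths are illustrative assumptions. The sketch only shows the core mechanism: semantic features from the range image are turned into a soft gate that guides motion features extracted from the residual maps.

# Minimal sketch under the assumptions stated above (PyTorch).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by BatchNorm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualBranchMOS(nn.Module):
    # Toy dual-branch model: the semantic branch (range image) produces a
    # movable-object gate that modulates the motion branch (residual maps).
    def __init__(self, n_residual_frames=8, n_classes=2):
        super().__init__()
        self.semantic_branch = conv_block(5, 32)                # range, x, y, z, intensity
        self.motion_branch = conv_block(n_residual_frames, 32)  # stacked residual maps
        self.guidance = nn.Sequential(nn.Conv2d(32, 32, 1), nn.Sigmoid())
        self.head = nn.Conv2d(32, n_classes, 1)                 # moving / static logits

    def forward(self, range_img, residual_maps):
        sem = self.semantic_branch(range_img)    # semantic features
        mot = self.motion_branch(residual_maps)  # motion features
        mot = mot * self.guidance(sem)           # semantic guidance gates the motion branch
        return self.head(mot)                    # per-pixel classification

# Dummy usage: batch of one 64x2048 range view with 8 residual frames.
model = DualBranchMOS()
logits = model(torch.randn(1, 5, 64, 2048), torch.randn(1, 8, 64, 2048))
print(logits.shape)  # torch.Size([1, 2, 64, 2048])

The sigmoid gate is one simple way to let movable-object cues from the semantic branch suppress motion responses on regions that cannot move; the actual MF-MOS fusion is more elaborate and should be taken from the released code.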
Pages: 12499-12505
Number of Pages: 7