Transformer-based two-source motion model for multi-object tracking

Cited by: 0
Authors
Jieming Yang
Hongwei Ge
Shuzhi Su
Guoqing Liu
Affiliations
[1] School of Artificial Intelligence and Computer Science, Jiangnan University
[2] Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University
[3] School of Computer Science and Engineering, Anhui University of Science & Technology
Source
Applied Intelligence | 2022, Vol. 52
Keywords
Deep learning; Neural network; Computer vision; Multi-object tracking; Motion model;
DOI
Not available
Abstract
Recently, benefiting from the development of detection models, multi-object tracking methods based on the tracking-by-detection paradigm have greatly improved in performance. However, most methods still rely on traditional motion models for position prediction, such as the constant-velocity model and the Kalman filter. Only a few methods adopt deep-network-based prediction, and these exploit only a simple RNN (Recurrent Neural Network) to predict positions, without accounting for the position offset caused by camera movement. Therefore, inspired by the outstanding performance of the Transformer on temporal tasks, this paper proposes a Transformer-based motion model for multi-object tracking. By taking the historical position differences of the target and the offset vectors between consecutive frames as input, the model considers the motion of both the target and the camera, which improves the prediction accuracy of the motion model used in multi-object tracking and thereby improves tracking performance. Comparative experiments and tracking results on the MOTChallenge benchmarks demonstrate the effectiveness of the proposed method.
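The abstract describes the model's two input sources (historical position differences of a target and camera offset vectors between consecutive frames) and its prediction target. Below is a minimal, hypothetical PyTorch sketch of such a two-source Transformer motion model; the class name, input dimensions (4-D box differences, 2-D camera offsets), and all hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a Transformer encoder that consumes
# per-frame target position differences together with camera offset vectors
# and regresses the position difference for the next frame.
import torch
import torch.nn as nn

class TwoSourceMotionModel(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2, seq_len=10):
        super().__init__()
        # Each time step: 4-D box difference (dx, dy, dw, dh) + 2-D camera offset (ox, oy)
        self.input_proj = nn.Linear(4 + 2, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 4)  # predicted box difference for the next frame

    def forward(self, box_diffs, cam_offsets):
        # box_diffs:   (B, T, 4) historical position differences of one target
        # cam_offsets: (B, T, 2) camera offset vectors between consecutive frames
        x = torch.cat([box_diffs, cam_offsets], dim=-1)           # (B, T, 6)
        x = self.input_proj(x) + self.pos_embed[:, : x.size(1)]   # add learned positions
        h = self.encoder(x)                                        # (B, T, d_model)
        return self.head(h[:, -1])                                 # (B, 4) next-step difference

# Usage: the predicted difference would be added to the last observed box
# to obtain the predicted position in the next frame.
model = TwoSourceMotionModel()
box_diffs = torch.randn(8, 10, 4)
cam_offsets = torch.randn(8, 10, 2)
pred_diff = model(box_diffs, cam_offsets)   # shape (8, 4)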
Pages: 9967-9979
Page count: 12
Related papers
50 records in total
  • [1] Transformer-based two-source motion model for multi-object tracking
    Yang, Jieming
    Ge, Hongwei
    Su, Shuzhi
    Liu, Guoqing
    APPLIED INTELLIGENCE, 2022, 52 (09) : 9967 - 9979
  • [2] UniTracker: transformer-based CrossUnihead for multi-object tracking
    Wu, Fan
    Zhang, Yifeng
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (04)
  • [3] Transformer-Based Multi-object Tracking in Unmanned Aerial Vehicles
    Li, Jiaxin
    Li, Hongjun
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VI, 2024, 14430 : 347 - 358
  • [4] MotionFormer: An Improved Transformer-Based Architecture for Multi-object Tracking
    Agrawal, Harshit
    Halder, Agrya
    Chattopadhyay, Pratik
    COMPUTER VISION AND IMAGE PROCESSING, CVIP 2023, PT III, 2024, 2011 : 212 - 224
  • [5] More Efficient Encoder: Boosting Transformer-Based Multi-object Tracking Performance Through YOLOX
    Zheng, Le
    Mao, Yaobin
    Zheng, Mengjin
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XII, 2025, 15042 : 376 - 389
  • [6] MotionTrack: End-to-End Transformer-based Multi-Object Tracking with LiDAR-Camera Fusion
    Zhang, Ce
    Zhang, Chengjie
    Guo, Yiluan
    Chen, Lingji
    Happold, Michael
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2023, : 151 - 160
  • [7] MO-Transformer: A Transformer-Based Multi-Object Point Cloud Reconstruction Network
    Lyu, Erli
    Zhang, Zhengyan
    Liu, Wei
    Wang, Jiaole
    Song, Shuang
    Meng, Max Q. -H.
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 1024 - 1030
  • [8] A Unified Object Motion and Affinity Model for Online Multi-Object Tracking
    Yin, Junbo
    Wang, Wenguan
    Meng, Qinghao
    Yang, Ruigang
    Shen, Jianbing
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 6767 - 6776
  • [9] A Transformer-Based Network for Hyperspectral Object Tracking
    Gao, Long
    Chen, Langkun
    Liu, Pan
    Jiang, Yan
    Xie, Weiying
    Li, Yunsong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [10] Multi-Object Tracking Algorithm Based on CNN-Transformer Feature Fusion
    Zhang, Yingjun
    Bai, Xiaohui
    Xie, Binhong
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (02) : 180 - 190