A Pedestrian Detection Model Based on Binocular Information Fusion

Cited by: 2
Authors
Zhang, Juan [1 ]
Ma, Zhonggui [1 ]
Nuermaimaiti, Nuerxiati [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing, Peoples R China
Keywords
Pedestrian detection; deep learning; binocular vision; PSMNet; Faster R-CNN;
DOI: 10.1109/wocc.2019.8770601
CLC number: TP3 [Computing Technology, Computer Technology]
Subject classification code: 0812
Abstract
Pedestrian detection, a special case of target detection, is a research hotspot in image processing and computer vision. Because monocular vision cannot obtain the depth information of an image, it cannot meet the accuracy requirements of pedestrian detection. To address this problem, a new cascading pedestrian detection model is proposed, combining PSMNet-based binocular information fusion with an improved Faster R-CNN pedestrian detection model. First, the binocular image pair is fed into the original PSMNet binocular information fusion module to obtain a disparity map, and the left and right images are then fused via the disparity map to produce a fusion image. Second, in the improved Faster R-CNN pedestrian detection module, the left, right, and fusion images of one frame are used as separate inputs, and pedestrian detection is carried out on each. Finally, the detection results of the three channels are passed through a target consistency validation module, and the verified pedestrian detections serve as the final output. The simulation results show that the accuracy and recall rate of the cascading model are improved: the missed detection rate is reduced to 13.42%, and the accuracy rate reaches 88.58%.
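The disparity-based fusion and three-channel consistency validation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact method: the warping convention, the simple averaging fusion, and the IoU voting threshold (`iou_thr`, `min_votes`) are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def fuse_with_disparity(left, right, disparity):
    """Warp the right image into the left view using the disparity map,
    then average the two views (simple averaging fusion; the paper's
    exact fusion rule is not given in the abstract)."""
    h, w = left.shape[:2]
    cols = np.arange(w)
    fused = np.empty_like(left, dtype=np.float32)
    for y in range(h):
        # For rectified stereo, left pixel (x, y) matches right pixel (x - d, y).
        src = np.clip(cols - disparity[y].astype(int), 0, w - 1)
        fused[y] = (left[y].astype(np.float32) + right[y, src]) / 2.0
    return fused.astype(left.dtype)

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def consistency_validate(dets_left, dets_right, dets_fused,
                         iou_thr=0.5, min_votes=2):
    """Keep a fused-channel detection only if at least `min_votes` of the
    three channels (counting the fused channel itself) contain an
    overlapping box. Boxes are assumed to be mapped to a common frame."""
    verified = []
    for box in dets_fused:
        votes = 1  # the fused channel votes for its own detection
        for channel in (dets_left, dets_right):
            if any(iou(box, other) >= iou_thr for other in channel):
                votes += 1
        if votes >= min_votes:
            verified.append(box)
    return verified
```

In a real system the per-channel detections would come from the three Faster R-CNN passes, and right-view boxes would first be shifted by disparity into the left frame before voting.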
Pages: 13-17
Page count: 5
Related Papers (50 records)
  • [41] A Pedestrian Detection Network Based on an Attention Mechanism and Pose Information
    Jiang, Zhaoyin
    Huang, Shucheng
    Li, Mingxing
    APPLIED SCIENCES-BASEL, 2024, 14 (18):
  • [42] Parallel binocular stereo-vision-based GPU accelerated pedestrian detection and distance computation
    Li, Jiaojiao
    Wu, Jiaji
    You, Yang
    Jeon, Gwanggil
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2020, 17 (03) : 447 - 457
  • [43] Orchard Pedestrian Detection and Location Based on Binocular Camera and Improved YOLOv3 Algorithm
    Jing L.
    Wang R.
    Liu H.
    Shen Y.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2020, 51 (09) : 34 - 39, 25
  • [44] Pedestrian Detection and Tracking Based on Far Infrared Visual Information
    Olmeda, Daniel
    Hilario, Cristina
    de la Escalera, Arturo
    Armingol, Jose M.
    ADVANCED CONCEPTS FOR INTELLIGENT VISION SYSTEMS, PROCEEDINGS, 2008, 5259 : 958 - 969
  • [45] Vocal Effort Detection Based on Spectral Information Entropy Feature and Model Fusion
    Chao, Hao
    Lu, Bao-Yun
    Liu, Yong-Li
    Zhi, Hui-Lai
    JOURNAL OF INFORMATION PROCESSING SYSTEMS, 2018, 14 (01): : 218 - 227
  • [46] A Transformer Based Multimodal Fine-Fusion Model for False Information Detection
    Xu, Bai-Ning
    Cao, Yu-Bo
    Meng, Jie
    He, Zi-Jian
    Wang, Li
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE. THEORY AND APPLICATIONS, IEA/AIE 2023, PT I, 2023, 13925 : 271 - 277
  • [47] Distributed Pedestrian Detection Alerts Based on Data Fusion with Accurate Localization
    Garcia, Fernando
    Jimenez, Felipe
    Javier Anaya, Jose
    Maria Armingol, Jose
    Eugenio Naranjo, Jose
    de la Escalera, Arturo
    SENSORS, 2013, 13 (09) : 11687 - 11708
  • [48] A pedestrian detection system based on thermopile and radar sensor data fusion
    Linzmeier, DT
    Skutek, M
    Mekhaiel, M
    Dietmayer, KCJ
    2005 7th International Conference on Information Fusion (FUSION), Vols 1 and 2, 2005, : 1272 - 1279
  • [50] Pedestrian detection based on channel feature fusion and enhanced semantic segmentation
    Zong, Xinlu
    Xu, Yuan
    Ye, Zhiwei
    Chen, Zhen
    APPLIED INTELLIGENCE, 2023, 53 (24) : 30203 - 30218