A Pedestrian Detection Model Based on Binocular Information Fusion

Cited by: 2
Authors
Zhang, Juan [1 ]
Ma, Zhonggui [1 ]
Nuermaimaiti, Nuerxiati [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing, Peoples R China
Keywords
Pedestrian detection; deep learning; binocular vision; PSMNet; Faster R-CNN;
DOI
10.1109/wocc.2019.8770601
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline code: 0812
Abstract
Pedestrian detection, a special case of object detection, is a research hotspot in image processing and computer vision. Because monocular vision cannot obtain the depth information of a scene, it cannot meet the accuracy requirements of pedestrian detection. To address this problem, a new cascading pedestrian detection model is proposed, combining a PSMNet binocular information fusion module with an improved Faster R-CNN pedestrian detection module. First, the binocular image pair is fed into the original PSMNet binocular information fusion module to obtain a disparity map, and the left and right images are then fused via the disparity map to produce a fusion image. Second, in the improved Faster R-CNN pedestrian detection module, the left, right, and fusion images of a frame are processed as separate inputs, and pedestrian detection is carried out on each. Finally, the detection results of the three channels are passed through a target consistency validation module, and the verified pedestrian targets are output as the final detection result. Simulation results show that the cascading model improves both accuracy and recall: the missed detection rate is reduced to 13.42%, and the accuracy rate reaches 88.58%.
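The target consistency validation step described above can be sketched as a cross-channel voting filter: a detection surviving in one channel is kept only if a sufficiently overlapping box appears in enough of the three channels (left, right, fused). This is a minimal illustrative sketch, not the authors' implementation; the box format, the IoU threshold, and the vote count are assumptions.

```python
# Hypothetical sketch of a target-consistency validation step for a
# three-channel detector (left, right, fused). A box from the reference
# channel is verified if at least `min_votes` channels (including its own)
# contain a box with IoU >= `thresh`. All parameters are illustrative.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def consistency_filter(channels, thresh=0.5, min_votes=2):
    """Keep boxes from the first (reference) channel that are corroborated
    by at least `min_votes` of the given channels overall."""
    verified = []
    for box in channels[0]:
        votes = sum(
            any(iou(box, other) >= thresh for other in ch)
            for ch in channels
        )
        if votes >= min_votes:
            verified.append(box)
    return verified
```

With `min_votes=2`, a pedestrian detected in the left image is accepted if either the right or the fused channel confirms it, which is one plausible way to suppress single-channel false positives.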
Pages: 13-17
Page count: 5