PaDNet: Pan-Density Crowd Counting

Cited by: 77
Authors
Tian, Yukun [1 ]
Lei, Yiming [1 ]
Zhang, Junping [1 ]
Wang, James Z. [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Penn State Univ, Coll Informat Sci & Technol, University Pk, PA 16802 USA
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
Keywords
Crowd counting; density level analysis; pan-density evaluation; convolutional neural networks; PARTIALLY OCCLUDED HUMANS; BAYESIAN COMBINATION; MULTIPLE; IMAGE;
DOI
10.1109/TIP.2019.2952083
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Crowd counting is a highly challenging problem in computer vision and machine learning. Most previous methods have focused on consistent-density crowds, i.e., either a sparse or a dense crowd, so they perform well in global estimation while neglecting local accuracy. To make crowd counting more useful in the real world, we propose a new perspective, named pan-density crowd counting, which aims to count people in crowds of varying density. Specifically, we propose the Pan-Density Network (PaDNet), which is composed of the following critical components. First, the Density-Aware Network (DAN) contains multiple subnetworks pretrained on scenarios with different densities. This module is capable of capturing pan-density information. Second, the Feature Enhancement Layer (FEL) effectively captures the global and local contextual features and generates a weight for each density-specific feature. Third, the Feature Fusion Network (FFN) embeds spatial context and fuses these density-specific features. Further, the metrics Patch MAE (PMAE) and Patch RMSE (PRMSE) are proposed to better evaluate the performance of both global and local estimation. Extensive experiments on four benchmark crowd-counting datasets, ShanghaiTech, UCF_CC_50, UCSD, and UCF-QNRF, indicate that PaDNet achieves state-of-the-art recognition performance and high robustness in pan-density crowd counting.
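The Patch MAE (PMAE) and Patch RMSE (PRMSE) metrics are only named in the abstract, so the following is a minimal sketch of the patch-wise idea, assuming each density map is partitioned into a grid of non-overlapping patches, per-patch people counts are compared, and the absolute and squared errors are averaged over all patches of all test images; the NumPy helpers patch_counts and pmae_prmse and the grid=4 default are illustrative names and choices, not taken from the paper.

import numpy as np

def patch_counts(density_map, grid):
    # Sum the density map over a grid x grid partition of non-overlapping
    # patches; each per-patch sum approximates the head count in that patch.
    rows = np.array_split(np.arange(density_map.shape[0]), grid)
    cols = np.array_split(np.arange(density_map.shape[1]), grid)
    return np.array([[density_map[np.ix_(r, c)].sum() for c in cols] for r in rows])

def pmae_prmse(pred_maps, gt_maps, grid=4):
    # Patch MAE / Patch RMSE over a set of predicted and ground-truth density
    # maps; with grid=1 they reduce to the ordinary image-level MAE and RMSE.
    abs_errs, sq_errs = [], []
    for pred, gt in zip(pred_maps, gt_maps):
        diff = patch_counts(pred, grid) - patch_counts(gt, grid)
        abs_errs.append(np.abs(diff).mean())   # mean absolute patch-count error
        sq_errs.append((diff ** 2).mean())     # mean squared patch-count error
    return float(np.mean(abs_errs)), float(np.sqrt(np.mean(sq_errs)))

Evaluating with grid=1 gives the usual image-level MAE and RMSE, while a larger grid penalizes predictions whose total count is right but whose spatial distribution is wrong, which is the local-accuracy failure the abstract describes.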
Pages: 2714-2727
Number of pages: 14
Related Papers
50 entries in total
  • [1] Approaches on crowd counting and density estimation: a review
    Li, Bo
    Huang, Hongbo
    Zhang, Ang
    Liu, Peiwen
    Liu, Cheng
    PATTERN ANALYSIS AND APPLICATIONS, 2021, 24 (03) : 853 - 874
  • [2] Crowd Counting and Localization Beyond Density Map
    Khan, Akbar
    Kadir, Kushsairy
    Nasir, Haidawati
    Shah, Jawad Ali
    Albattah, Waleed
    Khan, Sheroz
    Kakakhel, Muhammad Haris
    IEEE ACCESS, 2022, 10 : 133142 - 133151
  • [3] A comprehensive survey of crowd density estimation and counting
    Wang, Mingtao
    Zhou, Xin
    Chen, Yuanyuan
    IET IMAGE PROCESSING, 2025, 19 (01)
  • [4] Cascaded Residual Density Network for Crowd Counting
    Zhao, Kun
    Liu, Bin
    Song, Luchuan
    Li, Weihai
    Yu, Nenghai
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2199 - 2203
  • [5] Adaptive Density Map Generation for Crowd Counting
    Wan, Jia
    Chan, Antoni
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 1130 - 1139
  • [6] Survey on algorithms of people counting in dense crowd and crowd density estimation
    Yang, Ge
    Zhu, Dian
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (09) : 13637 - 13648
  • [7] A crowd counting method via density map and counting residual estimation
    Yang, Li
    Guo, Yanqun
    Sang, Jun
    Wu, Weiqun
    Wu, Zhongyuan
    Liu, Qi
    Xia, Xiaofeng
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (30) : 43503 - 43512