PDANet: Pyramid density-aware attention based network for accurate crowd counting

Cited by: 16
|
Authors
Amirgholipour, Saeed [1 ,2 ]
Jia, Wenjing [1 ]
Liu, Lei [3 ]
Fan, Xiaochen [1 ]
Wang, Dadong [2 ]
He, Xiangjian [1 ]
Affiliations
[1] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW, Australia
[2] CSIRO, Quantitat Imaging Res Team, Data61, Sydney, NSW, Australia
[3] Beihang Univ, Sch Instrumentat Sci & Optoelect Engn, Beijing, Peoples R China
Keywords
Crowd counting; Pyramid module; Density aware; Attention module; Classification module; Convolutional neural networks;
DOI
10.1016/j.neucom.2021.04.037
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Crowd counting, i.e., estimating the number of people in crowded areas, has attracted much interest in the research community. Although many attempts have been reported, crowd counting remains an open real-world problem due to the vast density variations and severe occlusion within the crowd area of interest. In this paper, we propose a novel Pyramid Density-Aware Attention based network, abbreviated as PDANet, which leverages attention, pyramid scale feature, and two-branch decoder modules for density-aware crowd counting. PDANet utilizes these modules to extract features at different scales while focusing on relevant information and suppressing misleading information. We also address the variation of crowdedness levels among different images with Density-Aware Decoder (DAD) modules. For this purpose, a classifier is constructed to evaluate the density level of the input features and then pass them to the corresponding high- and low-density DAD modules. Finally, we generate an overall density map as the summation of the low- and high-crowdedness density maps. Meanwhile, we employ different losses aimed at achieving a precise density map for the input scene. Extensive evaluations conducted on challenging benchmark datasets demonstrate the superior performance of the proposed PDANet, in terms of counting accuracy and the quality of generated density maps, over well-known state-of-the-art approaches. (c) 2021 Published by Elsevier B.V.
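The density-aware routing the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: the classifier, the two decoders, and the gating scheme below are toy placeholders standing in for PDANet's learned classification module and its low/high-density DAD branches; only the overall data flow (classify density level, decode in two branches, sum into one density map whose integral is the count) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def density_classifier(features):
    # Toy stand-in for the classification module: a sigmoid over the mean
    # activation gives the probability that the scene is high-density.
    return 1.0 / (1.0 + np.exp(-(features.mean() - 0.5) * 10.0))

def low_density_decoder(features):
    # Placeholder low-density branch: produces a sparse, non-negative map.
    return np.maximum(features - 0.6, 0.0)

def high_density_decoder(features):
    # Placeholder high-density branch: produces a denser map.
    return np.maximum(features - 0.2, 0.0)

def predict_density_map(features):
    p_high = density_classifier(features)
    low_map = low_density_decoder(features)
    high_map = high_density_decoder(features)
    # Overall density map: the two branch outputs combined, gated by the
    # classifier's density score (an assumption; the paper sums the branch
    # outputs after routing features to the matching DAD module).
    return (1.0 - p_high) * low_map + p_high * high_map

features = rng.random((8, 8))        # stand-in for extracted pyramid features
density_map = predict_density_map(features)
estimated_count = density_map.sum()  # crowd count = integral of the density map
```

The key design point carried over from the abstract is that the count is never predicted directly: it is recovered by summing a per-pixel density map, and the map itself is assembled from branches specialized for different crowdedness levels.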
Pages: 215-230
Number of pages: 16