Attention-injective scale aggregation network for crowd counting

Cited: 1
Authors
Zou, Haojie [1 ]
Kuang, Yingchun [1 ]
Luo, Jianqiang [2 ]
Yao, Mingwei [1 ]
Zhou, Haoyu [1 ]
Yang, Sha [1 ]
Affiliations
[1] Hunan Agr Univ, Coll Informat & Intelligent Sci & Technol, Changsha, Peoples R China
[2] Hunan Prov Nat Resources Affairs Ctr, Changsha, Peoples R China
Keywords
crowd counting; convolutional neural network; attention mechanism; multi-scale feature;
DOI
10.1117/1.JEI.33.5.053008
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology; Communication Technology];
Subject classification codes
0808; 0809;
Abstract
Crowd counting has gained widespread attention in public safety management, video surveillance, and emergency response. Background interference and head-scale variation remain intractable problems. We propose an attention-injective scale aggregation network (ASANet) to address them. ASANet consists of three parts: a shallow feature attention network (SFAN), a multi-level feature aggregation (MLFA) module, and a density map generation (DMG) network. SFAN suppresses the noise of cluttered backgrounds by cross-injecting attention modules into a truncated VGG16 backbone. To fully exploit the multi-scale crowd information embedded in feature layers at different depths, the MLFA module densely connects the multi-layer feature maps, mitigating the scale variation problem. In addition, to capture large-scale head information, the DMG network introduces successive dilated convolutional layers that further enlarge the model's receptive field, improving counting accuracy. Extensive experiments on five public datasets (ShanghaiTech Part_A, ShanghaiTech Part_B, UCF_QNRF, UCF_CC_50, JHU-Crowd++) show that ASANet outperforms most existing methods in counting accuracy and handles background noise robustly across diverse scenes. (c) 2024 SPIE and IS&T
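To make the three-part design in the abstract concrete, here is a minimal PyTorch sketch of such a pipeline: attention modules injected between truncated-VGG16 stages (SFAN), dense concatenation of multi-level feature maps (MLFA), and a dilated-convolution density head (DMG). All module names, channel widths, dilation rates, and fusion details below are illustrative assumptions, not the paper's actual configuration; the abstract does not specify them. The predicted crowd count is the integral (sum) of the output density map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Illustrative spatial attention gate: a 1x1 conv yields a sigmoid mask
    that re-weights the feature map (a stand-in for the paper's module)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))

class ASANetSketch(nn.Module):
    """Hypothetical three-part layout: SFAN backbone with injected attention,
    MLFA dense multi-level fusion, DMG dilated-convolution density head."""
    def __init__(self):
        super().__init__()
        def vgg_block(cin, cout, n):  # VGG16-style stage: n x (conv3x3 + ReLU)
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)
        # SFAN: truncated VGG16-like stages with attention injected between them.
        self.stage1, self.att1 = vgg_block(3, 64, 2), SpatialAttention(64)
        self.stage2, self.att2 = vgg_block(64, 128, 2), SpatialAttention(128)
        self.stage3, self.att3 = vgg_block(128, 256, 3), SpatialAttention(256)
        self.stage4 = vgg_block(256, 512, 3)
        self.pool = nn.MaxPool2d(2)
        # MLFA: 1x1 projections to a common width before dense concatenation.
        self.proj2 = nn.Conv2d(128, 128, 1)
        self.proj3 = nn.Conv2d(256, 128, 1)
        self.proj4 = nn.Conv2d(512, 128, 1)
        # DMG: successive dilated 3x3 convs enlarge the receptive field,
        # ending in a single-channel density map.
        self.dmg = nn.Sequential(
            nn.Conv2d(384, 256, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        f1 = self.att1(self.stage1(x))               # full resolution
        f2 = self.att2(self.stage2(self.pool(f1)))   # 1/2 resolution
        f3 = self.att3(self.stage3(self.pool(f2)))   # 1/4 resolution
        f4 = self.stage4(self.pool(f3))              # 1/8 resolution
        # Upsample the deeper maps to the 1/2-resolution grid, then fuse.
        size = f2.shape[-2:]
        f3u = F.interpolate(self.proj3(f3), size=size, mode='bilinear',
                            align_corners=False)
        f4u = F.interpolate(self.proj4(f4), size=size, mode='bilinear',
                            align_corners=False)
        fused = torch.cat([self.proj2(f2), f3u, f4u], dim=1)  # 384 channels
        return self.dmg(fused)

model = ASANetSketch()
density = model(torch.randn(1, 3, 384, 384))
count = density.sum().item()  # crowd count = integral of the density map
```

The stacked dilation-2 convolutions in the DMG head grow the receptive field without extra pooling, which is the stated rationale for capturing large heads while keeping the density map spatially dense.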
Pages: 20
Related papers
50 records in total
  • [1] ADCrowdNet: An Attention-Injective Deformable Convolutional Network for Crowd Understanding
    Liu, Ning
    Long, Yongchao
    Zou, Changqing
    Niu, Qun
    Pan, Li
    Wu, Hefeng
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 3220 - 3229
  • [2] Hierarchical feature aggregation network with semantic attention for counting large-scale crowd
    Meng, Chen
    Kang, Chunmeng
    Lyu, Lei
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (11) : 9957 - 9981
  • [3] Multi-Scale Context Aggregation Network with Attention-Guided for Crowd Counting
    Wang, Xin
    Lv, Rongrong
    Zhao, Yang
    Yang, Tangwen
    Ruan, Qiuqi
    PROCEEDINGS OF 2020 IEEE 15TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP 2020), 2020, : 240 - 245
  • [4] Scale Aggregation Network for Accurate and Efficient Crowd Counting
    Cao, Xinkun
    Wang, Zhipeng
    Zhao, Yanyun
    Su, Fei
    COMPUTER VISION - ECCV 2018, PT V, 2018, 11209 : 757 - 773
  • [5] STOCHASTIC MULTI-SCALE AGGREGATION NETWORK FOR CROWD COUNTING
    Wang, Mingjie
    Cai, Hao
    Zhou, Jun
    Gong, Minglun
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 2008 - 2012
  • [6] Multi-Scale Guided Attention Network for Crowd Counting
    Li, Pengfei
    Zhang, Min
    Wan, Jian
    Jiang, Ming
    SCIENTIFIC PROGRAMMING, 2021, 2021
  • [7] Multi-scale Attention Recalibration Network for crowd counting
    Xie, Jinyang
    Pang, Chen
    Zheng, Yanjun
    Li, Liang
    Lyu, Chen
    Lyu, Lei
    Liu, Hong
    APPLIED SOFT COMPUTING, 2022, 117
  • [8] Domain adaptive crowd counting via dynamic scale aggregation network
    Huo, Zhanqiang
    Wang, Yanan
    Qiao, Yingxu
    Wang, Jing
    Luo, Fen
    IET COMPUTER VISION, 2023, 17 (07) : 814 - 828
  • [9] Jointly attention network for crowd counting
    He, Yuqiang
    Xia, Yinfeng
    Wang, Yizhen
    Yin, Baoqun
    NEUROCOMPUTING, 2022, 487 : 157 - 171
  • [10] Convolutional Attention Network for Crowd Counting
    Zhu, Yubin
    Li, Wengen
    Guan, Jihong
    Zhang, Yichao
COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (01) : 156 - 161