CSFNet: A novel counting network based on context features and multi-scale information

Cited by: 0
Authors
Xiong, Liyan [1 ]
Li, Zhida [1 ]
Huang, Xiaohui [1 ]
Wang, Heng [1 ]
Affiliations
[1] East China Jiaotong Univ, Sch Informat & Software Engn, Nanchang 330013, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Attentional mechanisms; Crowd counting; Multiscale features; Sheltering phenomenon; CROWD; SCALE
DOI
10.1007/s00530-024-01603-6
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Crowd-counting techniques aim to estimate the number of people in an image or video accurately and in real time. With the development of deep learning in recent years, the accuracy of crowd counting has improved; however, accuracy in crowded scenes with large-scale variations still falls short. To address this, this paper proposes a novel crowd-counting network: the Context-Scaled Fusion Network (CSFNet). Its contributions are: (1) the Multi-Scale Receptive Field Fusion Module (MRFF Module), which employs multiple dilated convolutional layers with different dilation rates and uses a fusion mechanism to combine multi-scale information into higher-quality feature maps; (2) the Contextual Space Attention Module (CSA Module), which obtains pixel-level contextual information and combines it with an attention map, enabling the model to learn autonomously and focus on important regions, thereby reducing counting error. The model is trained and evaluated on five datasets: ShanghaiTech, UCF_CC_50, WorldExpo'10, BEIJING-BRT, and Mall. The experimental results show that CSFNet outperforms many state-of-the-art (SOTA) methods on these datasets, demonstrating its superior counting ability and robustness.
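The abstract describes two ideas: fusing branches of dilated convolutions with different dilation rates (MRFF), and gating features with a pixel-level attention map derived from local context (CSA). The sketch below illustrates both mechanisms in plain numpy. It is an illustrative reconstruction only, not the paper's architecture: the 3x3 averaging kernel, the dilation rates (1, 2, 3), the mean fusion, and the sigmoid gating are all assumptions chosen for clarity, since the abstract does not specify them.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Naive 'same'-padded 2-D convolution of a single-channel map x
    with a k x k kernel whose taps are spread apart by `dilation`."""
    k = kernel.shape[0]
    eff = dilation * (k - 1) + 1        # effective receptive field size
    pad = eff // 2
    xp = np.pad(x, pad, mode="constant")
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            for ki in range(k):
                for kj in range(k):
                    out[i, j] += kernel[ki, kj] * xp[i + ki * dilation,
                                                     j + kj * dilation]
    return out

def mrff_fusion(feature_map, dilation_rates=(1, 2, 3)):
    """MRFF-style branch fusion (hypothetical): run the same kernel at
    several dilation rates and average the branches pixel-wise."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # fixed averaging kernel, for illustration
    branches = [dilated_conv2d(feature_map, kernel, d) for d in dilation_rates]
    return np.mean(branches, axis=0)

def spatial_attention(feature_map):
    """CSA-style gating (hypothetical): a local context map passed through
    a sigmoid becomes a per-pixel attention map that reweights features."""
    kernel = np.full((3, 3), 1.0 / 9.0)
    context = dilated_conv2d(feature_map, kernel, dilation=1)
    attn = 1.0 / (1.0 + np.exp(-context))  # sigmoid attention in (0, 1)
    return feature_map * attn
```

In a real network these branches would be learned convolutions fused by concatenation and a 1x1 convolution rather than a fixed average; the sketch only shows how different dilation rates widen the receptive field over the same input while keeping the output resolution unchanged.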
Pages: 21
Related papers
Showing 10 of 50
  • [1] An Adaptive Multi-Scale Network Based on Depth Information for Crowd Counting
    Zhang, Peng
    Lei, Weimin
    Zhao, Xinlei
    Dong, Lijia
    Lin, Zhaonan
    SENSORS, 2023, 23 (18)
  • [2] Dense Crowd Counting Network Based on Multi-scale Perception
    Li, Hengchao
    Liu, Xianglian
    Liu, Peng
    Feng, Bin
Xinan Jiaotong Daxue Xuebao/Journal of Southwest Jiaotong University, 2024, 59 (05) : 1176 - 1183
  • [3] Crowd Counting Method Based on Multi-Scale Enhanced Network
    Xu Tao
    Duan Yinong
    Du Jiahao
    Liu Caihua
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43 (06) : 1764 - 1771
  • [4] Context-Aware Multi-Scale Aggregation Network for Congested Crowd Counting
    Huang, Liangjun
    Shen, Shihui
    Zhu, Luning
    Shi, Qingxuan
    Zhang, Jianwei
    SENSORS, 2022, 22 (09)
  • [5] Multi-Scale Context Aggregation Network with Attention-Guided for Crowd Counting
    Wang, Xin
    Lv, Rongrong
    Zhao, Yang
    Yang, Tangwen
    Ruan, Qiuqi
    PROCEEDINGS OF 2020 IEEE 15TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP 2020), 2020, : 240 - 245
  • [6] MSNet: Multi-scale Network for Crowd Counting
    Shi, Ying
    Sang, Jun
    Alam, Mohammad S.
    Liu, Xinyue
    Tian, Shaoli
    PATTERN RECOGNITION AND TRACKING XXXII, 2021, 11735
  • [7] Multi-scale supervised network for crowd counting
    Wang, Yongjie
    Zhang, Wei
    Huang, Dongxiao
    Liu, Yanyan
    Zhu, Jianghua
    IET IMAGE PROCESSING, 2020, 14 (17) : 4701 - 4707
  • [8] Multi-scale features fused network with multi-level supervised path for crowd counting
    Wang, Yongjie
    Zhang, Wei
    Huang, Dongxiao
    Liu, Yanyan
    Zhu, Jianghua
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 200
  • [9] Leaf Counting with Multi-Scale Convolutional Neural Network Features and Fisher Vector Coding
    Jiang, Boran
    Wang, Ping
    Zhuang, Shuo
    Li, Maosong
    Li, Zhenfa
    Gong, Zhihong
SYMMETRY-BASEL, 2019, 11 (04)
  • [10] Multi-Level Medical Image Segmentation Network Based on Multi-Scale and Context Information Fusion Strategy
    Tan, Dayu
    Yao, Zhiyuan
    Peng, Xin
    Ma, Haiping
    Dai, Yike
    Su, Yansen
    Zhong, Weimin
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (01) : 474 - 487