Butterfly network: a convolutional neural network with a new architecture for multi-scale semantic segmentation of pedestrians

Cited: 0
Authors
Alavianmehr, M. A. [1 ]
Helfroush, M. S. [1 ]
Danyali, H. [1 ]
Tashk, A. [2 ]
Affiliations
[1] Shiraz Univ Technol, Dept Elect Engn, Shiraz, Iran
[2] Univ Southern Denmark SDU, Maersk Mc Kinney Moller Inst MMMI, Odense, Denmark
Keywords
Butterfly network (BF-Net); Convolutional neural network; Pedestrian detection; Semantic segmentation; State-of-the-art U-Nets; OBJECT DETECTION;
DOI
10.1007/s11554-023-01273-z
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The detection of multi-scale pedestrians is one of the challenging tasks in pedestrian detection applications. Moreover, accurate localization of small-scale pedestrians, i.e., pedestrians appearing as low-resolution target objects, can also help address the problem of occluded pedestrian detection. In this paper, we present a fully convolutional neural network with a new architecture and a fully detailed deep-supervision scheme for semantic segmentation of pedestrians. The proposed network is named the butterfly network (BF-Net) because its architecture resembles a butterfly. BF-Net retains architectural simplicity, allowing it to process static images at a real-time rate. The sub-path blocks embedded in its architecture provide higher accuracy for detecting multi-scale targets, including small ones. Another advantage of the proposed architecture is the replacement of standard batch normalization with conditional batch normalization. The experimental results demonstrate that the proposed network outperforms state-of-the-art networks such as U-Net++, UNet3+, Mask R-CNN, and DeepLabv3+ for the semantic segmentation of pedestrians.
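The record does not include the authors' implementation, but the abstract's distinction between standard and conditional batch normalization can be illustrated with a minimal sketch. In the generic form of conditional batch normalization, activations are normalized with shared batch statistics, while the affine scale and shift parameters are selected per sample according to a condition label; all names below (`conditional_batch_norm`, the toy shapes, the two-condition setup) are hypothetical and are not taken from the BF-Net paper.

```python
import numpy as np

def conditional_batch_norm(x, cond_ids, gammas, betas, eps=1e-5):
    """Normalize with shared batch statistics, then apply a
    condition-specific affine transform.

    x        : (N, C) activations
    cond_ids : (N,)   integer condition label per sample
    gammas   : (K, C) per-condition scale parameters
    betas    : (K, C) per-condition shift parameters
    """
    mean = x.mean(axis=0)                    # per-channel batch mean
    var = x.var(axis=0)                      # per-channel batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # standard BN normalization
    # Unlike standard BN (one shared gamma/beta pair), each sample uses
    # the affine parameters indexed by its own condition label.
    return gammas[cond_ids] * x_hat + betas[cond_ids]

# Toy example: 4 samples, 3 channels, 2 conditions.
x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])
cond = np.array([0, 1, 0, 1])
gammas = np.ones((2, 3))
gammas[1] *= 2.0                 # condition 1 doubles the scale
betas = np.zeros((2, 3))
y = conditional_batch_norm(x, cond, gammas, betas)
```

Setting all gammas to one and all betas to zero recovers standard batch normalization, which makes the relationship between the two schemes explicit.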
Pages: 17
Related papers
50 records
  • [41] Multi-scale face detection based on convolutional neural network
    Luo, Mingzhu
    Xiao, Yewei
    Zhou, Yan
    2018 CHINESE AUTOMATION CONGRESS (CAC), 2018, : 1752 - 1757
  • [42] A Deep Multi-scale Convolutional Neural Network for Classifying Heartbeats
    Bai, Mengyao
    Xu, Yongjun
    Wang, Lianyan
    Wei, Zhihui
    2018 11TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, BIOMEDICAL ENGINEERING AND INFORMATICS (CISP-BMEI 2018), 2018,
  • [43] Multi-scale boundary neural network for gastric tumor segmentation
    Wang, Pengfei
    Li, Yunqi
    Sun, Yaru
    He, Dongzhi
    Wang, Zhiqiang
    VISUAL COMPUTER, 2023, 39 (03): : 915 - 926
  • [44] Multi-Scale Deep Neural Network Microscopic Image Segmentation
    Wu, Xundong
    Wu, Yong
    Stefani, Enrico
    BIOPHYSICAL JOURNAL, 2015, 108 (02) : 473A - 473A
  • [46] MFPNet: A Multi-scale Feature Propagation Network for Lightweight Semantic Segmentation
    Xu, Guoan
    Jia, Wenjing
    Wu, Tao
    Chen, Ligeng
    Gao, Guangwei
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT III, 2024, 15018 : 76 - 86
  • [47] Multi-Scale Feature Aggregation Network for Semantic Segmentation of Land Cover
    Shen, Xu
    Weng, Liguo
    Xia, Min
    Lin, Haifeng
    REMOTE SENSING, 2022, 14 (23)
  • [48] Enhanced multi-scale feature adaptive fusion sparse convolutional network for large-scale scenes semantic segmentation
    Shen, Lingfeng
    Cao, Yanlong
    Zhu, Wenbin
    Ren, Kai
    Shou, Yejun
    Wang, Haocheng
    Xu, Zhijie
    COMPUTERS & GRAPHICS-UK, 2025, 126
  • [49] Semantic image segmentation using fully convolutional neural networks with multi-scale images and multi-scale dilated convolutions
    Duc My Vo
    Sang-Woong Lee
    Multimedia Tools and Applications, 2018, 77 : 18689 - 18707