Semantic Segmentation of Aerial Imagery Using U-Net with Self-Attention and Separable Convolutions

Cited by: 1
Authors
Khan, Bakht Alam [1 ]
Jung, Jin-Woo [1 ]
Affiliations
[1] Dongguk Univ, Dept Comp Sci & Engn, Seoul 04620, South Korea
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 9
Keywords
semantic segmentation; U-Net; self-attention; separable convolutions; aerial imagery; remote sensing; RESOLUTION; SATELLITE; NETWORK;
DOI
10.3390/app14093712
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
This research addresses the task of improving accuracy in the semantic segmentation of aerial imagery, which is essential for applications such as urban planning and environmental monitoring. The study adopts the Intersection over Union (IoU) score as its evaluation metric and augments the dataset with the Patchify library, using a patch size of 256, before splitting it into training and testing sets. The core of the investigation is a novel architecture that combines a U-Net framework with self-attention mechanisms and separable convolutions: the self-attention mechanisms enhance the model's understanding of image context, while the separable convolutions speed up training and improve overall efficiency. The proposed model delivers a substantial accuracy improvement over the previous state-of-the-art Dense Plus U-Net, achieving 91% accuracy compared to 86%. Visual results, including original image patches, ground-truth mask patches, and predicted mask patches, demonstrate the model's proficiency in semantic segmentation, marking a significant advance in aerial image analysis and underscoring the value of these architectural elements for accuracy and efficiency in such tasks.
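The abstract describes tiling the aerial images into patches of size 256 with the Patchify library before the train/test split. Below is a minimal sketch of that step, assuming square 256 x 256 RGB tiles and non-overlapping patches; the source image size, channel layout, and overlap are assumptions, since the record does not specify them.

```python
import numpy as np
from patchify import patchify

# Placeholder aerial tile; real inputs would be loaded from the dataset.
image = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)

# Non-overlapping 256x256x3 patches (step equal to the patch size).
patches = patchify(image, (256, 256, 3), step=256)

# Flatten the patch grid into individual patches ready for a train/test split.
patches = patches.reshape(-1, 256, 256, 3)
print(patches.shape)  # (16, 256, 256, 3)
```

The paper's own architecture code is not reproduced in this record, so the following is only a hedged sketch, in Keras/TensorFlow, of how a U-Net built from separable convolutions with a self-attention block at the bottleneck might be assembled. The depth, channel widths, number of attention heads, and the six-class output are placeholder assumptions, not values taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def sep_conv_block(x, filters):
    """Two separable convolutions, standing in for a standard U-Net double-conv block."""
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def self_attention(x, num_heads=4):
    """Self-attention over spatial positions, treating each pixel as a token."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    tokens = layers.Reshape((h * w, c))(x)
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=c)(tokens, tokens)
    return layers.Reshape((h, w, c))(attn)

inputs = layers.Input((256, 256, 3))
e1 = sep_conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(e1)
e2 = sep_conv_block(p1, 64)
p2 = layers.MaxPooling2D()(e2)
b = sep_conv_block(p2, 128)
b = self_attention(b)  # attention applied at the bottleneck, where the token count is smallest
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
d2 = sep_conv_block(layers.Concatenate()([u2, e2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
d1 = sep_conv_block(layers.Concatenate()([u1, e1]), 32)
outputs = layers.Conv2D(6, 1, activation="softmax")(d1)  # six classes is a placeholder
model = tf.keras.Model(inputs, outputs)
```

Separable convolutions factor each 3 x 3 convolution into a depthwise and a pointwise step, which is the usual source of the training-speed benefit the abstract mentions; placing the attention layer at the bottleneck keeps the flattened token count small enough for full self-attention on 256 x 256 inputs.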
Pages: 15