SepFE: Separable Fusion Enhanced Network for Retinal Vessel Segmentation

Cited by: 1
Authors
Wu, Yun [1 ]
Jiao, Ge [1 ,2 ]
Liu, Jiahao [1 ]
Affiliations
[1] Hengyang Normal Univ, Coll Comp Sci & Technol, Hengyang 421002, Peoples R China
[2] Hunan Prov Key Lab Intelligent Informat Proc & App, Hengyang 421002, Peoples R China
Source
Keywords
Retinal vessel segmentation; U-Net; depth-wise separable convolution; feature fusion; fundus; architecture; images
DOI
10.32604/cmes.2023.026189
CLC Number
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
The accurate and automatic segmentation of retinal vessels from fundus images is critical for the early diagnosis and prevention of many eye diseases, such as diabetic retinopathy (DR). Existing retinal vessel segmentation approaches based on convolutional neural networks (CNNs) have achieved remarkable effectiveness. Here, we present a retinal vessel segmentation model with low complexity and high performance, built on U-Net, one of the most popular architectures. Motivated by the effectiveness of depth-wise separable convolution, we use it to replace the standard convolutional layers, reducing the model's complexity by cutting the number of parameters and computations it requires. To maintain performance while removing redundant parameters, we integrate a pre-trained MobileNet V2 into the encoder. A feature fusion residual module (FFRM) is then designed to exploit the complementary strengths of adjacent feature levels by enhancing their fusion, which alleviates the extraneous clutter introduced by direct fusion. Finally, we provide detailed comparisons between the proposed SepFE and U-Net on three mainstream retinal image datasets (DRIVE, STARE, and CHASEDB1). The results show that SepFE has only 3% of the parameters and 8% of the FLOPs of U-Net while achieving better segmentation performance. The superiority of SepFE is further demonstrated through comparisons with other advanced methods.
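The two building blocks the abstract describes, standard convolutions replaced by depth-wise separable convolutions and a residual fusion of adjacent feature levels, can be illustrated with a short PyTorch sketch. This is not the authors' published implementation; the channel counts, the FusionResidualBlock design, and the fusion order are assumptions chosen only to show the general idea and the parameter savings.

# Illustrative sketch (not the authors' code): a depth-wise separable
# convolution block of the kind the abstract describes replacing the
# standard 3x3 convolutions in U-Net, plus a hypothetical fusion block
# that adds a residual path over the combination of adjacent feature levels.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depth-wise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        # Point-wise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class FusionResidualBlock(nn.Module):
    """Hypothetical adjacent-level fusion with a residual connection.

    `low` is the upsampled deeper feature map, `high` the encoder skip
    feature at the same spatial resolution and channel count.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = DepthwiseSeparableConv(2 * channels, channels)

    def forward(self, low, high):
        fused = self.fuse(torch.cat([low, high], dim=1))
        return fused + high  # residual path keeps the skip features intact


if __name__ == "__main__":
    std = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
    sep = DepthwiseSeparableConv(64, 128)
    count = lambda m: sum(p.numel() for p in m.parameters())
    # Depth-wise separable uses far fewer weights than the standard 3x3 conv:
    # 64*128*3*3 = 73728 versus 64*9 + 64*128 + BatchNorm parameters = 9024.
    print(count(std), count(sep))

Comparing the two parameter counts above gives a rough intuition for how replacing every standard convolution can shrink a U-Net-style model toward the roughly 3% parameter footprint reported in the abstract; the exact savings in SepFE also depend on the MobileNet V2 encoder and the FFRM design described in the paper.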
Pages: 2465-2485
Number of pages: 21