Synergistic attention U-Net for sublingual vein segmentation

Cited by: 2
Authors
Tingxiao Yang
Yuichiro Yoshimura
Akira Morita
Takao Namiki
Toshiya Nakaguchi
Affiliations
[1] Chiba University, Graduate School of Science and Technology
[2] Chiba University, Center for Frontier Medical Engineering
[3] Chiba University, Graduate School of Medicine
Source
Artificial Life and Robotics, 2019, 24(4): 550-559
Keywords
Tongue; Sublingual veins; Segmentation; Synergistic; Attention; Deep learning
DOI
Not available
Abstract
The tongue is one of the most sensitive organs of the human body, and changes in its appearance reflect changes in a person's physiological state. One feature of the tongue that can be used to assess blood circulation is the shape of the sublingual vein. This paper therefore aims to segment the sublingual vein from RGB images of the tongue. In conventional deep-learning-based segmentation, input images are usually downsized to reduce training cost. However, the sublingual vein occupies a much smaller fraction of the image than the tongue does, so resized inputs often cause the network to miss the small target entirely and produce an "all black" output. Using a small dataset, this study first shows that training sublingual vein segmentation is considerably harder than tongue segmentation, and it also compares the effect of different input sizes on segmenting the small sublingual region. To address these problems, we propose a synergistic attention network. By decomposing the encoder-decoder framework and updating its parameters synergistically, the proposed network not only speeds up training convergence but also avoids getting trapped in local optima and keeps training stable, without increasing training cost or requiring additional region-level auxiliary labels.
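The abstract describes the architecture and training scheme only at a high level. As one possible reading, the PyTorch sketch below pairs a small attention U-Net with separate encoder and decoder optimizers that are stepped in alternation; this is an assumption about what "updating the parameters synergistically" might mean, not the authors' published method, and the names TinyAttentionUNet, AttentionGate, and synergistic_step are hypothetical, introduced here purely for illustration.

```python
# Minimal sketch of "synergistic" training for an attention U-Net,
# ASSUMING the synergistic update amounts to splitting the network into
# encoder and decoder parameter groups with separate, alternating
# optimizer steps. Names below are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class AttentionGate(nn.Module):
    # Additive attention gate in the spirit of Attention U-Net:
    # the decoder signal g re-weights the encoder skip connection x.
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, g, x):
        a = torch.sigmoid(self.psi(F.relu(self.wg(g) + self.wx(x))))
        return x * a  # suppress encoder activations away from the target

class TinyAttentionUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.att = AttentionGate(g_ch=32, x_ch=32, inter_ch=16)
        self.dec = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        g = self.up(s2)
        d = self.dec(torch.cat([self.att(g, s1), g], dim=1))
        return self.head(d)

model = TinyAttentionUNet()
# Split the parameters into encoder and decoder groups, each with its
# own optimizer, so the two halves can be updated in alternation.
enc_params = list(model.enc1.parameters()) + list(model.enc2.parameters())
dec_params = [p for n, p in model.named_parameters() if not n.startswith("enc")]
opt_enc = torch.optim.Adam(enc_params, lr=1e-4)
opt_dec = torch.optim.Adam(dec_params, lr=1e-4)

def synergistic_step(images, masks, step):
    logits = model(images)
    loss = F.binary_cross_entropy_with_logits(logits, masks)
    opt_enc.zero_grad(); opt_dec.zero_grad()
    loss.backward()
    # Alternate which half of the network is updated on each step.
    (opt_enc if step % 2 == 0 else opt_dec).step()
    return loss.item()

# Toy usage: a sparse binary mask mimics a thin vein target.
x = torch.randn(2, 3, 128, 128)
y = (torch.rand(2, 1, 128, 128) > 0.9).float()
print(synergistic_step(x, y, step=0))
```

Stepping only one half of the network per iteration is one way to read the abstract's "synergistic" update: the per-step cost is unchanged (a single forward and backward pass), which is consistent with the stated claim of improved convergence and stability at no extra training cost.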
Pages: 550-559 (9 pages)
Related papers (50 in total)
  • [1] Synergistic attention U-Net for sublingual vein segmentation
    Yang, Tingxiao
    Yoshimura, Yuichiro
    Morita, Akira
    Namiki, Takao
    Nakaguchi, Toshiya
    ARTIFICIAL LIFE AND ROBOTICS, 2019, 24 (04) : 550 - 559
  • [2] Segmentation of Palm Vein Images Using U-Net
    Marattukalam, Felix
    Abdulla, Waleed H.
    2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 64 - 70
  • [3] Attention guided U-Net for accurate iris segmentation
    Lian, Sheng
    Luo, Zhiming
    Zhong, Zhun
    Lin, Xiang
    Su, Songzhi
    Li, Shaozi
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 56 : 296 - 304
  • [4] Multiscale Attention U-Net for Skin Lesion Segmentation
    Alahmadi, Mohammad D.
    IEEE ACCESS, 2022, 10 : 59145 - 59154
  • [5] U-Net with Attention Mechanism for Retinal Vessel Segmentation
    Si, Ze
    Fu, Dongmei
    Li, Jiahao
    IMAGE AND GRAPHICS, ICIG 2019, PT II, 2019, 11902 : 668 - 677
  • [6] Dual Encoder Attention U-net for nuclei segmentation
    Vahadane, Abhishek
    Atheeth, B.
    Majumdar, Shantanu
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 3205 - 3208
  • [7] AttU-NET: Attention U-Net for Brain Tumor Segmentation
    Wang, Sihan
    Li, Lei
    Zhuang, Xiahai
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES, BRAINLES 2021, PT II, 2022, 12963 : 302 - 311
  • [8] Attention-augmented U-Net (AA-U-Net) for semantic segmentation
    Rajamani, Kumar T.
    Rani, Priya
    Siebert, Hanna
    ElagiriRamalingam, Rajkumar
    Heinrich, Mattias P.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 981 - 989
  • [9] Automated seismic semantic segmentation using attention U-Net
    Alsalmi, Haifa
    Elsheikh, Ahmed H.
    GEOPHYSICS, 2024, 89 (01) : WA247 - WA263