ENHANCING TONGUE REGION SEGMENTATION THROUGH SELF-ATTENTION AND TRANSFORMER BASED

Times Cited: 0
Authors
Song, Yihua [1 ,2 ]
Li, Can [1 ,2 ]
Zhang, Xia [1 ,2 ]
Liu, Zhen [3 ]
Song, Ningning [4 ]
Zhou, Zuojian [1 ,2 ]
Affiliations
[1] Nanjing Univ Chinese Med, Sch Artificial Intelligence & Informat Technol, Nanjing 210003, Peoples R China
[2] Nanjing Univ Chinese Med, Jiangsu Prov Engn Res Ctr TCM Intelligence Hlth Se, Nanjing, Peoples R China
[3] Nanjing Univ Chinese Med, Sch Med Humanities, Nanjing 210003, Peoples R China
[4] Nanjing First Hosp, Nanjing 210003, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Deep learning; transformer; harnessing self-attention; tongue segmentation;
DOI
10.1142/S0219519424400098
CLC Number
Q6 [Biophysics]
Discipline Code
071011;
Abstract
As an essential component of traditional Chinese medicine diagnosis, tongue diagnosis has been limited in clinical practice by its subjectivity and reliance on physician experience. Recent advances in deep learning have opened new possibilities for the automated analysis and diagnosis of tongue images. In this paper, we collected 500 tongue images from various patients; these images were preprocessed and annotated to form the dataset used in this experiment. This work builds on a previously proposed segmentation method that harnesses self-attention and transformers, organized into three key stages: feature extraction, feature fusion, and segmentation prediction. By combining these stages, our tongue region segmentation model handles complex tongue images more robustly and provides accurate segmentation results. The segmentation Dice coefficient reaches 0.953, which is of significant importance for the automation and objectivity of tongue diagnosis in traditional Chinese medicine.
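The abstract describes a three-stage pipeline (feature extraction, feature fusion via self-attention, segmentation prediction) evaluated with the Dice coefficient. Below is a minimal PyTorch sketch of such a pipeline and of the Dice metric, offered only as an illustration: the class and function names (TongueSegSketch, dice_coefficient) and all layer sizes are assumptions, not the authors' implementation.

# Minimal sketch (not the authors' model) of a three-stage tongue-segmentation
# network: CNN feature extraction, transformer-based feature fusion through
# self-attention, and a decoder head for segmentation prediction.
import torch
import torch.nn as nn

class TongueSegSketch(nn.Module):
    def __init__(self, channels=64, heads=4, layers=2):
        super().__init__()
        # Stage 1: feature extraction (downsamples the image by 4x)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Stage 2: feature fusion with self-attention over spatial tokens
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=layers)
        # Stage 3: segmentation prediction (upsample back to input size)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, x):
        f = self.encoder(x)                    # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        fused = self.fusion(tokens)            # self-attention feature fusion
        f = fused.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.head(f))     # tongue-region probability mask

def dice_coefficient(pred, target, eps=1e-6):
    # Dice overlap between a binary predicted mask and the ground-truth mask.
    pred, target = pred.flatten(), target.flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Usage example on a dummy tongue image
model = TongueSegSketch()
img = torch.rand(1, 3, 256, 256)
mask = model(img)                              # (1, 1, 256, 256)
print(dice_coefficient((mask > 0.5).float(), torch.ones_like(mask)))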
Pages: 11