DMOIT: denoised multi-omics integration approach based on transformer multi-head self-attention mechanism

Cited: 0
Authors
Liu, Zhe [1 ]
Park, Taesung [1 ,2 ]
Affiliations
[1] Seoul Natl Univ, Interdisciplinary Program Bioinformat, Seoul, South Korea
[2] Seoul Natl Univ, Dept Stat, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
multi-omics integration; survival time prediction; deep learning; machine learning; multi-head self-attention; CELLS;
DOI
10.3389/fgene.2024.1488683
Chinese Library Classification
Q3 [Genetics];
Discipline Classification Code
071007; 090102;
Abstract
Multi-omics data integration has become increasingly crucial for a deeper understanding of the complexity of biological systems. However, effectively integrating and analyzing multi-omics data remains challenging due to their heterogeneity and high dimensionality. Existing methods often struggle with noise, redundant features, and the complex interactions between different omics layers, leading to suboptimal performance. Additionally, they have difficulty adequately capturing intra-omics interactions because of simplistic concatenation techniques, and they risk losing critical inter-omics interaction information when using hierarchical attention layers. To address these challenges, we propose a novel Denoised Multi-Omics Integration approach that leverages the Transformer multi-head self-attention mechanism (DMOIT). DMOIT consists of three key modules: a generative adversarial imputation network for handling missing values, a sampling-based robust feature selection module to reduce noise and redundant features, and a multi-head self-attention (MHSA) based feature extractor with a novel architecture that enhances the capture of intra-omics interactions. We validated model performance using cancer datasets from The Cancer Genome Atlas (TCGA) on two tasks: survival time classification across different cancer types and estrogen receptor status classification for breast cancer. Our results show that DMOIT outperforms traditional machine learning methods and the state-of-the-art integration method MoGCN in terms of accuracy and weighted F1 score. Furthermore, we compared DMOIT with various alternative MHSA-based architectures, and DMOIT consistently outperformed these models across various cancer types and omics combinations. The strong performance and robustness of DMOIT demonstrate its potential as a valuable tool for integrating multi-omics data across various applications.
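To make the abstract's description of an MHSA-based multi-omics feature extractor concrete, the sketch below is a minimal PyTorch illustration, not the authors' implementation: each omics block (after imputation and feature selection) is embedded into a token and multi-head self-attention is applied across the tokens so interactions between omics layers can be modeled. All names (OmicsMHSAExtractor, omics_dims, embed_dim, the classification head) and dimensions are assumptions for illustration; DMOIT's actual architecture, intra-omics handling, and task heads differ.

# Illustrative sketch only (assumed names/dimensions), not the DMOIT code.
import torch
import torch.nn as nn

class OmicsMHSAExtractor(nn.Module):
    """Embed each omics block into a token and apply multi-head self-attention."""
    def __init__(self, omics_dims, embed_dim=64, num_heads=4, num_classes=2):
        super().__init__()
        # one linear embedding per omics layer (e.g., mRNA, methylation, CNV)
        self.embedders = nn.ModuleList(nn.Linear(d, embed_dim) for d in omics_dims)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.classifier = nn.Linear(embed_dim * len(omics_dims), num_classes)

    def forward(self, omics_list):
        # omics_list: one (batch, n_features_i) tensor per omics layer
        tokens = torch.stack(
            [emb(x) for emb, x in zip(self.embedders, omics_list)], dim=1
        )  # (batch, n_omics, embed_dim)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # attention across omics tokens
        h = self.norm(tokens + attn_out)                  # residual connection + layer norm
        return self.classifier(h.flatten(start_dim=1))    # class logits

# toy usage: three omics layers with 200, 150, and 100 selected features
model = OmicsMHSAExtractor([200, 150, 100])
batch = [torch.randn(8, 200), torch.randn(8, 150), torch.randn(8, 100)]
logits = model(batch)  # shape (8, 2), e.g., survival-time groups

In this toy setup each omics layer contributes a single token; a per-feature or per-chunk tokenization, as a paper focused on intra-omics interactions would likely use, follows the same attention pattern with more tokens.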
Pages: 12