Global and cross-modal feature aggregation for multi-omics data classification and application

Cited by: 10
Authors
Zheng, Xiao [1 ]
Wang, Minhui [2 ]
Huang, Kai [3 ]
Zhu, En [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha 410073, Peoples R China
[2] Nanjing Med Univ, Kangda Coll, Lianshui Peoples Hosp, Dept Pharm, Huaian 223300, Peoples R China
[3] Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Clin Ctr Human Gene Res, Wuhan 430030, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-omics data classification; Multi-modal learning; Cross-modal fusion; Contrastive learning; NETWORK; FUSION; GRAPH; MULTIMODALITY; PREDICTION; BIOLOGY;
DOI
10.1016/j.inffus.2023.102077
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
With the rapid development of single-cell multi-modal sequencing technologies, increasing amounts of multi-omics data are being generated, providing a unique opportunity to identify distinct cell types at the single-cell level. It is therefore important to integrate the different modalities, each with high-dimensional features, to boost the final multi-omics data classification performance. However, existing multi-omics data classification methods mainly focus on exploiting the complementary information of different modalities, while ignoring the learning confidence and the cross-modal sample relationships during information fusion. In this paper, we propose a multi-omics data classification network based on global and cross-modal feature aggregation, referred to as GCFANet. On the one hand, considering that many feature dimensions in each modality do not contribute to the final classification performance but instead disturb the discriminability of different samples, we propose a feature confidence learning mechanism that suppresses redundant features while enhancing the expression of discriminative feature dimensions in each modality. On the other hand, to capture the inherent sample structure information implied in each modality, we design a graph convolutional network branch to learn the corresponding structure-preserving feature representation. The modal-specific feature representations are then concatenated and fed into a transformer-based global and cross-modal feature aggregation module that learns a consensus feature representation across modalities. In addition, the consensus feature representation used for final classification is enhanced via a view-specific consistency-preserving contrastive learning strategy. Extensive experiments on four multi-omics datasets demonstrate the efficacy of the proposed GCFANet.
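The abstract walks through three concrete architectural components: per-modality feature confidence gating, a graph convolutional branch per modality, and transformer-based global and cross-modal aggregation. As a reading aid, here is a minimal PyTorch sketch of that pipeline. The record does not include the authors' implementation, so every name (ConfidenceGate, GCNBranch, GCFANetSketch), layer size, and the one-token-per-modality arrangement (the paper itself concatenates the modal-specific representations) are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceGate(nn.Module):
    # Feature confidence learning (assumed form): a learned sigmoid gate that
    # suppresses redundant feature dimensions and emphasizes discriminative ones.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (n_samples, dim)
        return x * torch.sigmoid(self.score(x))  # element-wise confidence gating

class GCNBranch(nn.Module):
    # One graph-convolution step per modality, preserving the sample-structure
    # information carried by that modality's (pre-normalized) sample graph.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                   # adj: (n, n) normalized graph
        return F.relu(adj @ self.proj(x))

class GCFANetSketch(nn.Module):
    def __init__(self, modality_dims, hidden=128, n_classes=5):
        super().__init__()
        self.gates = nn.ModuleList([ConfidenceGate(d) for d in modality_dims])
        self.gcns = nn.ModuleList([GCNBranch(d, hidden) for d in modality_dims])
        # Each modality's representation becomes one token per sample, so
        # self-attention over these tokens aggregates features across modalities.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.aggregator = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, xs, adjs):                 # one entry per modality
        tokens = [gcn(gate(x), a) for gate, gcn, x, a
                  in zip(self.gates, self.gcns, xs, adjs)]
        tokens = torch.stack(tokens, dim=1)      # (n_samples, n_modalities, hidden)
        fused = self.aggregator(tokens)          # cross-modal feature aggregation
        consensus = fused.mean(dim=1)            # consensus representation
        return self.classifier(consensus), consensus

# Toy usage: two omics modalities for 8 samples, identity graphs as placeholders.
xs = [torch.randn(8, 100), torch.randn(8, 80)]
adjs = [torch.eye(8), torch.eye(8)]
logits, consensus = GCFANetSketch([100, 80])(xs, adjs)

The toy call at the end shows the intended shapes: per-modality feature matrices plus per-modality sample graphs go in, and class logits together with a consensus embedding come out.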
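The remaining component, the view-specific consistency-preserving contrastive learning strategy, is described only at this level of detail. One standard way to realize such an objective is an InfoNCE-style loss that, for each modality-specific view, treats the consensus representation of the same sample as the positive and other samples' consensus representations as negatives; the sketch below assumes that reading and is not the authors' exact formulation.

import torch
import torch.nn.functional as F

def consistency_contrastive_loss(view_z, consensus_z, tau=0.5):
    # view_z:      (n, d) representations from one modality ("view")
    # consensus_z: (n, d) fused consensus representations of the same samples
    # tau: temperature; 0.5 is an arbitrary illustrative choice, not from the paper
    v = F.normalize(view_z, dim=1)
    c = F.normalize(consensus_z, dim=1)
    logits = v @ c.t() / tau               # (n, n) cosine-similarity matrix
    targets = torch.arange(v.size(0))      # positive pair = same sample index
    return F.cross_entropy(logits, targets)

# Summing over all views encourages each modality to stay consistent with the
# consensus: loss = sum(consistency_contrastive_loss(z, consensus)
#                       for z in view_representations)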
Pages: 9
Related papers
50 records in total
  • [41] The Omics Dashboard for Interactive Exploration of Metabolomics and Multi-Omics Data
    Paley, Suzanne
    Karp, Peter D.
    METABOLITES, 2024, 14(1)
  • [42] CMSE: Cross-Modal Semantic Enhancement Network for Classification of Hyperspectral and LiDAR Data
    Han, Wenqi
    Miao, Wang
    Geng, Jie
    Jiang, Wen
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62: 1-14
  • [43] Joint learning of cross-modal classifier and factor analysis for multimedia data classification
    Duan, Kanghong
    Zhang, Hongxin
    Wang, Jim Jing-Yan
    NEURAL COMPUTING AND APPLICATIONS, 2016, 27(2): 459-468
  • [45] MOMA: a multi-task attention learning algorithm for multi-omics data interpretation and classification
    Moon, Sehwan
    Lee, Hyunju
    BIOINFORMATICS, 2022, 38(8): 2287-2296
  • [46] Multi-task framework based on feature separation and reconstruction for cross-modal retrieval
    Zhang, Li
    Wu, Xiangqian
    PATTERN RECOGNITION, 2022, 122
  • [47] Classifying the multi-omics data of gastric cancer using a deep feature selection method
    Hu, Yanyu
    Zhao, Long
    Li, Zhao
    Dong, Xiangjun
    Xu, Tiantian
    Zhao, Yuhai
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 200
  • [48] Grouping by feature of cross-modal flankers in temporal ventriloquism
    Klimova, Michaela
    Nishida, Shin'ya
    Roseboom, Warrick
    SCIENTIFIC REPORTS, 2017, 7
  • [49] Learning Coupled Feature Spaces for Cross-modal Matching
    Wang, Kaiye
    He, Ran
    Wang, Wei
    Wang, Liang
    Tan, Tieniu
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013: 2088-2095