Global and cross-modal feature aggregation for multi-omics data classification and application

Cited by: 10
Authors
Zheng, Xiao [1 ]
Wang, Minhui [2 ]
Huang, Kai [3 ]
Zhu, En [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha 410073, Peoples R China
[2] Nanjing Med Univ, Kangda Coll, Lianshui Peoples Hosp, Dept Pharm, Huaian 223300, Peoples R China
[3] Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Clin Ctr Human Gene Res, Wuhan 430030, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-omics data classification; Multi-modal learning; Cross-modal fusion; Contrastive learning; NETWORK; FUSION; GRAPH; MULTIMODALITY; PREDICTION; BIOLOGY;
DOI
10.1016/j.inffus.2023.102077
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
With the rapid development of single-cell multi-modal sequencing technologies, ever more multi-omics data are becoming available, providing a unique opportunity to identify distinct cell types at the single-cell level. It is therefore important to integrate the different high-dimensional modalities in order to boost final multi-omics data classification performance. However, existing multi-omics data classification methods mainly focus on exploiting the complementary information of different modalities, while ignoring learning confidence and cross-modal sample relationships during information fusion. In this paper, we propose a multi-omics data classification network via global and cross-modal feature aggregation, referred to as GCFANet. On the one hand, considering that many feature dimensions in each modality do not contribute to final classification performance but instead disturb the discriminability of different samples, we propose a feature confidence learning mechanism that suppresses redundant features and enhances the expression of discriminative feature dimensions in each modality. On the other hand, to capture the inherent sample structure information implied in each modality, we design a graph convolutional network branch to learn the corresponding structure-preserved feature representation. The modality-specific feature representations are then concatenated and fed into a transformer-induced global and cross-modal feature aggregation module that learns a consensus feature representation across modalities. In addition, the consensus feature representation used for final classification is enhanced via a view-specific consistency-preserving contrastive learning strategy. Extensive experiments on four multi-omics datasets demonstrate the efficacy of the proposed GCFANet.
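The abstract only names the components (confidence gating, per-modality GCN branches, transformer-based cross-modal fusion). As an illustration of how such a pipeline could be wired together, the following is a minimal PyTorch-style sketch. All layer names, sizes, the sigmoid gating, the one-step GCN, and the token-per-modality fusion (used here in place of the paper's concatenation step) are assumptions for illustration, not the authors' actual implementation; the contrastive consistency term is omitted.

```python
# Minimal sketch of a GCFANet-style pipeline; hypothetical names and sizes,
# not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureConfidence(nn.Module):
    """Per-dimension gate that suppresses redundant features (assumed sigmoid gating)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (N, dim)
        return x * torch.sigmoid(self.gate(x))  # re-weight each feature dimension


class SimpleGCN(nn.Module):
    """One graph-convolution step: propagate features over a sample-similarity graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                  # adj: (N, N) row-normalized similarity
        return F.relu(self.proj(adj @ x))


class GCFANetSketch(nn.Module):
    def __init__(self, modality_dims, hidden=128, n_classes=5, n_heads=4):
        super().__init__()
        self.confidence = nn.ModuleList(FeatureConfidence(d) for d in modality_dims)
        self.gcn = nn.ModuleList(SimpleGCN(d, hidden) for d in modality_dims)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                               batch_first=True)
        self.fusion = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, xs, adjs):
        # Modality-specific branches: confidence gating + structure-preserving GCN.
        tokens = [gcn(conf(x), a) for conf, gcn, x, a in
                  zip(self.confidence, self.gcn, xs, adjs)]
        # Treat each modality as one token per sample: (N, M, hidden).
        tokens = torch.stack(tokens, dim=1)
        fused = self.fusion(tokens)             # global / cross-modal aggregation
        consensus = fused.mean(dim=1)           # consensus representation
        return self.classifier(consensus), consensus


if __name__ == "__main__":
    N, dims = 32, (200, 150, 100)               # toy sizes, not from the paper
    xs = [torch.randn(N, d) for d in dims]
    adjs = [torch.softmax(torch.randn(N, N), dim=1) for _ in dims]
    logits, consensus = GCFANetSketch(dims)(xs, adjs)
    print(logits.shape, consensus.shape)         # (32, 5) and (32, 128)
```

In this reading, the confidence gate plays the role of the feature confidence learning mechanism, the GCN branch injects sample-structure information from a per-modality similarity graph, and the transformer encoder stands in for the global and cross-modal feature aggregation module that produces the consensus representation used for classification.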
Pages: 9
Related Papers
50 records in total
  • [1] Unsupervised learning of cross-modal mappings in multi-omics data for survival stratification of gastric cancer
    Xu, Jianmin
    Xu, Binghua
    Li, Yipeng
    Su, Zhijian
    Yao, Yueping
    FUTURE ONCOLOGY, 2021, 18 (02) : 215 - 230
  • [2] A cross-modal feature aggregation and enhancement network for hyperspectral and LiDAR joint classification
    Zhang, Yiyan
    Gao, Hongmin
    Zhou, Jun
    Zhang, Chenkai
    Ghamisi, Pedram
    Xu, Shufang
    Li, Chenming
    Zhang, Bing
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 258
  • [3] Dynamic Cross-Modal Feature Interaction Network for Hyperspectral and LiDAR Data Classification
    Lin, Junyan
    Gao, Feng
    Qi, Lin
    Dong, Junyu
    Du, Qian
    Gao, Xinbo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [4] A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection
    Li, Jia
    Hu, Zihan
    Yang, Zhenguo
    Lee, Lap-Kei
    Wang, Fu Lee
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT VI, PAKDD 2024, 2024, 14650 : 42 - 53
  • [5] Cross-Modal Retrieval Augmentation for Multi-Modal Classification
    Gur, Shir
    Neverova, Natalia
    Stauffer, Chris
    Lim, Ser-Nam
    Kiela, Douwe
    Reiter, Austin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 111 - 123
  • [6] Stability of Feature Selection in Multi-Omics Data Analysis
    Lukaszuk, Tomasz
    Krawczuk, Jerzy
    Zyla, Kamil
    Kesik, Jacek
    APPLIED SCIENCES-BASEL, 2024, 14 (23):
  • [7] Disparity Refinement Based on Cross-Modal Feature Fusion and Global Hourglass Aggregation for Robust Stereo Matching
    Wang, Gang
    Yang, Jinlong
    Wang, Yinghui
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VI, 2025, 15036 : 211 - 225
  • [8] Supervised cross-modal factor analysis for multiple modal data classification
    Wang, Jingbin
    Zhou, Yihua
    Duan, Kanghong
    Wang, Jim Jing-Yan
    Bensmail, Halima
    2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, 2015, : 1882 - 1888
  • [9] Benchmark study of feature selection strategies for multi-omics data
    Yingxia Li
    Ulrich Mansmann
    Shangming Du
    Roman Hornung
    BMC Bioinformatics, 23
  • [10] Benchmark study of feature selection strategies for multi-omics data
    Li, Yingxia
    Mansmann, Ulrich
    Du, Shangming
    Hornung, Roman
    BMC BIOINFORMATICS, 2022, 23 (01)