MGTDR: A Multi-modal Graph Transformer Network for Cancer Drug Response Prediction

Citations: 0
Author
Yan, Chi [1]
Affiliation
[1] Officers Coll PAP, Dept Informat & Commun, Chengdu 610213, Peoples R China
Keywords
Drug response prediction; multi-omics fusion; drug structure; graph convolutional neural network
DOI
10.1109/ICAIBD62003.2024.10604610
CLC Classification
TP18 [Theory of artificial intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Drug response prediction in cancer cell lines can guide researchers in designing personalized treatments for individual patients. However, accurately predicting drug response remains challenging. This study proposes MGTDR, a multi-modal graph transformer framework for drug response prediction. First, MGTDR learns latent features of cancer cell lines with an auto-encoder. Second, it employs graph convolutional neural networks (GCNs) and multi-layer perceptrons (MLPs) to learn drug features from the simplified molecular-input line-entry system (SMILES) representations and molecular fingerprints of drugs. Third, it uses miRNA expression, DNA methylation, and drug physicochemical properties to compute cell-line similarity and drug similarity, and combines the two to construct a heterogeneous network whose node features are the cell-line and drug features computed earlier. Finally, it applies a graph transformer network and an MLP to predict drug sensitivity. Extensive experiments on publicly available datasets demonstrate the effectiveness and efficiency of the proposed method in predicting drug response and its potential value in guiding personalized therapy.
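The abstract walks through a multi-step pipeline (auto-encoder for cell lines, GCN + MLP for drugs, similarity-based heterogeneous network, graph transformer + MLP scoring). Below is a minimal PyTorch sketch of that flow, not the authors' implementation: every layer width, the 0.5 similarity threshold, the summed fusion of the two drug views, and the random tensors standing in for omics profiles, SMILES-derived molecular graphs, and fingerprints are illustrative assumptions, and the cell-drug cross edges of the heterogeneous network are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellLineAE(nn.Module):
    """Auto-encoder that compresses a multi-omics cell-line profile into a latent vector."""
    def __init__(self, in_dim, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)                 # latent feature and reconstruction

class GCNLayer(nn.Module):
    """One graph convolution: symmetrically normalised adjacency times node features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
    def forward(self, x, adj):
        d = adj.sum(-1).clamp(min=1e-6).pow(-0.5)
        a = d.unsqueeze(-1) * adj * d.unsqueeze(-2)
        return F.relu(self.lin(a @ x))

class DrugEncoder(nn.Module):
    """GCN over the molecular graph plus an MLP over the fingerprint, summed.
    Parsing SMILES into atom features and an adjacency (e.g. with RDKit) is
    assumed to happen upstream and is not shown here."""
    def __init__(self, atom_dim, fp_dim, latent=64):
        super().__init__()
        self.g1, self.g2 = GCNLayer(atom_dim, 128), GCNLayer(128, latent)
        self.fp = nn.Sequential(nn.Linear(fp_dim, 128), nn.ReLU(), nn.Linear(128, latent))
    def forward(self, atoms, adj, fp):
        h = self.g2(self.g1(atoms, adj), adj).mean(-2)   # mean-pool atoms -> drug vector
        return h + self.fp(fp)

def block_adjacency(cells, drugs, thresh=0.5):
    """Heterogeneous-network adjacency from thresholded cosine similarity within
    each node type; cell-drug cross edges are omitted in this sketch."""
    def sim(x):
        x = F.normalize(x, dim=-1)
        return (x @ x.t() > thresh).float()
    nc = cells.size(0)
    adj = torch.zeros(nc + drugs.size(0), nc + drugs.size(0))
    adj[:nc, :nc], adj[nc:, nc:] = sim(cells), sim(drugs)
    return adj

class GraphTransformer(nn.Module):
    """Self-attention restricted to graph edges, then an MLP that scores
    (cell-line node, drug node) pairs."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x, adj, pairs):
        mask = (adj + torch.eye(len(adj))) == 0          # True = attention not allowed
        h, _ = self.attn(x[None], x[None], x[None], attn_mask=mask)
        h = h[0]
        return self.mlp(torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], -1)).squeeze(-1)

# Toy forward pass with random stand-ins for real omics/SMILES/fingerprint data.
nc, nd, na = 8, 5, 12                                    # cells, drugs, atoms per drug
cell_z, _ = CellLineAE(in_dim=300)(torch.randn(nc, 300))
mol_adj = torch.rand(nd, na, na).round()
mol_adj = ((mol_adj + mol_adj.transpose(-1, -2)) > 0).float()  # symmetric toy bonds
drug_z = DrugEncoder(atom_dim=16, fp_dim=128)(
    torch.randn(nd, na, 16), mol_adj, torch.randn(nd, 128))
adj = block_adjacency(cell_z, drug_z)
pairs = torch.tensor([[0, nc + 0], [1, nc + 2]])         # (cell node, drug node) indices
print(GraphTransformer()(torch.cat([cell_z, drug_z]), adj, pairs).shape)  # torch.Size([2])
```

In this rendering the predicted sensitivity is a scalar per (cell line, drug) pair read off the heterogeneous-graph node embeddings; in practice the model would be trained end-to-end against measured response values such as IC50.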
Pages: 351 - 355
Number of pages: 5
Related Papers
50 records in total
  • [21] MMGCN: Multi-modal multi-view graph convolutional networks for cancer prognosis prediction
    Yang, Ping
    Chen, Wengxiang
    Qiu, Hang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 257
  • [22] AAFormer: A Multi-Modal Transformer Network for Aerial Agricultural Images
    Shen, Yao
    Wang, Lei
    Jin, Yue
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 1704 - 1710
  • [23] Multi-modal mask Transformer network for social event classification
    Chen H.
    Qian S.
    Li Z.
    Fang Q.
    Xu C.
Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2024, 50 (02) : 579 - 587
  • [24] Multi-modal Graph Neural Network with Transformer-Guided Adaptive Diffusion for Preclinical Alzheimer Classification
    Sim, Jaeyoon
    Lee, Minjae
    Wu, Guorong
    Kim, Won Hwa
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT V, 2024, 15005 : 511 - 521
  • [25] Temporal multi-modal knowledge graph generation for link prediction
    Li, Yuandi
    Ji, Hui
    Yu, Fei
    Cheng, Lechao
    Che, Nan
    NEURAL NETWORKS, 2025, 185
  • [26] Adversarial Graph Attention Network for Multi-modal Cross-modal Retrieval
    Wu, Hongchang
    Guan, Ziyu
    Zhi, Tao
Zhao, Wei
    Xu, Cai
    Han, Hong
Yang, Yaming
    2019 10TH IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (ICBK 2019), 2019, : 265 - 272
  • [27] CMGNet: Collaborative multi-modal graph network for video captioning
    Rao, Qi
    Yu, Xin
    Li, Guang
    Zhu, Linchao
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 238
  • [28] Hierarchical Multi-Modal Prompting Transformer for Multi-Modal Long Document Classification
    Liu, Tengfei
    Hu, Yongli
    Gao, Junbin
    Sun, Yanfeng
    Yin, Baocai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 6376 - 6390
  • [29] Multi-Modal Structure-Embedding Graph Transformer for Visual Commonsense Reasoning
    Zhu, Jian
    Wang, Hanli
    He, Bin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1295 - 1305
  • [30] Tile Classification Based Viewport Prediction with Multi-modal Fusion Transformer
    Zhang, Zhihao
    Chen, Yiwei
    Zhang, Weizhan
    Yan, Caixia
    Zheng, Qinghua
    Wang, Qi
    Chen, Wangdu
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3560 - 3568