Heterogeneous latent transfer learning in Gaussian graphical models

Cited: 0
Authors
Wu, Qiong [1 ,2 ,3 ]
Wang, Chi [4 ,5 ]
Chen, Yong [1 ,2 ]
Affiliations
[1] Univ Penn, Perelman Sch Med, Blockley Hall 602,423 Guardian Dr, Philadelphia, PA 19104 USA
[2] Univ Penn, Ctr Hlth AI & Synth Evidence CHASE, Philadelphia, PA 19104 USA
[3] Univ Pittsburgh, Dept Biostat, Pittsburgh, PA 15261 USA
[4] Univ Kentucky, Coll Med, Dept Internal Med, Div Canc Biostat, Lexington, KY 40536 USA
[5] Univ Kentucky, Dept Stat, Lexington, KY 40536 USA
Funding
US National Institutes of Health;
Keywords
Gaussian graphical model; latent subpopulation; precision matrix; transfer learning; INVERSE COVARIANCE ESTIMATION; EXPRESSION; PHENOTYPE;
DOI
10.1093/biomtc/ujae096
Chinese Library Classification (CLC)
Q [Biological Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Gaussian graphical models (GGMs) are useful for understanding the complex relationships between biological entities. Transfer learning can improve the estimation of GGMs in a target dataset by incorporating relevant information from related source studies. However, biomedical research often involves intrinsic and latent heterogeneity within a study, such as heterogeneous subpopulations. This heterogeneity can make it difficult to identify informative source studies, or it can lead to negative transfer if a source study is used improperly. To address this challenge, we developed a heterogeneous latent transfer learning (Latent-TL) approach that accounts for both within-sample and between-sample heterogeneity. The idea behind this approach is to "learn from the alike" by leveraging the similarities between source and target GGMs within each subpopulation. The Latent-TL algorithm simultaneously identifies common subpopulation structures among samples and facilitates the learning of target GGMs using source samples from the same subpopulation. Through extensive simulations and a real-data application, we show that the proposed method outperforms single-site learning and standard transfer learning that ignores the latent structures. We also demonstrate the applicability of the proposed algorithm in characterizing gene co-expression networks in breast cancer patients, where the inferred genetic networks identified many biologically meaningful gene-gene interactions.
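The abstract combines two standard ingredients: recovering a latent subpopulation structure and estimating a sparse precision matrix (the GGM) within each subpopulation. The sketch below is not the Latent-TL algorithm itself (the paper's estimator and the transfer-learning step are not reproduced here); it is a minimal illustration of those two ingredients using scikit-learn's `GaussianMixture` and `GraphicalLasso`, on synthetic data with an assumed two-subpopulation mixture.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
p = 5

# Two hypothetical latent subpopulations with different sparse precision matrices
prec1 = np.eye(p)
prec1[0, 1] = prec1[1, 0] = 0.4  # edge (0, 1) only in subpopulation 1
prec2 = np.eye(p)
prec2[2, 3] = prec2[3, 2] = 0.4  # edge (2, 3) only in subpopulation 2

X1 = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec1), size=300)
X2 = rng.multivariate_normal(np.full(p, 3.0), np.linalg.inv(prec2), size=300)
X = np.vstack([X1, X2])

# Step 1: recover the latent subpopulation structure from the pooled sample
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# Step 2: estimate a sparse precision matrix (a GGM) within each subpopulation,
# rather than fitting one GGM to the heterogeneous pooled data
models = {k: GraphicalLasso(alpha=0.1).fit(X[labels == k]) for k in (0, 1)}
for k, m in models.items():
    print(f"subpopulation {k}: precision matrix shape {m.precision_.shape}")
```

Fitting a single GGM to the pooled sample would blur the two distinct edge sets together; clustering first and estimating within clusters is the "learn from the alike" intuition the abstract describes, here applied to a single dataset rather than across source and target studies.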
Pages: 12