Fully interpretable deep learning model of transcriptional control

Cited by: 16
Authors
Liu, Yi [1 ]
Barr, Kenneth [2 ]
Reinitz, John [1 ,3 ,4 ,5 ]
Affiliations
[1] Univ Chicago, Inst Genom & Syst Biol, Dept Stat, Chicago, IL 60637 USA
[2] Univ Chicago, Inst Genom & Syst Biol, Dept Human Genet, Chicago, IL 60637 USA
[3] Univ Chicago, Inst Genom & Syst Biol, Dept Ecol & Evolut, Chicago, IL 60637 USA
[4] Univ Chicago, Inst Genom & Syst Biol, Dept Mol Genet, Chicago, IL 60637 USA
[5] Univ Chicago, Inst Genom & Syst Biol, Dept Cell Biol, Chicago, IL 60637 USA
Funding
U.S. National Institutes of Health (NIH)
Keywords
COOPERATIVE DNA-BINDING; DROSOPHILA; EXPRESSION; ENHANCERS; STRIPE; SEGMENTATION; REPRESSION; MECHANISM; NETWORKS; SEQUENCE;
DOI
10.1093/bioinformatics/btaa506
Chinese Library Classification (CLC)
Q5 [Biochemistry]
Subject classification codes
071010; 081704
Abstract
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent work in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model is set purely by machine learning considerations, with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control, in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are formulated in terms of specific chemical equations that appear different in form from those used in neural networks. Results: In this paper, we give an example of a DNN that can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to the underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for the analysis of much larger datasets obtained by systems biology studies on a genomic scale.
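The abstract's central technical claim is that a chemically grounded model of transcription factor (TF) binding can be recast as a neural network whose weights are interpretable chemical quantities. The sketch below is not the authors' model; the function names, parameters, and numbers are illustrative assumptions showing, in a minimal way, how sigmoidal site occupancies (a function of TF concentration and binding energy) can feed a second sigmoidal layer that stands in for recruitment of the transcriptional machinery.

```python
import numpy as np

# Minimal illustrative sketch (NOT the authors' model): a toy thermodynamic
# view of TF binding written as a two-layer network whose parameters map
# onto chemical quantities rather than arbitrary weights.

def site_occupancy(log_tf_conc, binding_energy):
    """Fractional occupancy of each site: sigmoid of (log[TF] - binding energy)."""
    return 1.0 / (1.0 + np.exp(-(log_tf_conc - binding_energy)))

def transcription_rate(log_tf_conc, binding_energies, activation_weights, theta):
    """Toy 'interpretable layer': site occupancies are weighted by per-site
    activation strengths (activators positive, repressors negative) and passed
    through a second sigmoid representing recruitment of the machinery."""
    occ = site_occupancy(log_tf_conc, binding_energies)   # layer 1: binding
    drive = np.dot(activation_weights, occ)               # weighted activation
    return 1.0 / (1.0 + np.exp(-(drive - theta)))         # layer 2: recruitment

# Hypothetical numbers, for illustration only.
log_conc = np.array([0.5, -1.0, 2.0])   # log concentrations of three TFs
energies = np.array([0.0, 0.5, 1.0])    # binding energies of their sites
weights = np.array([1.2, -0.8, 0.6])    # activation (+) / repression (-) strengths
print(transcription_rate(log_conc, energies, weights, theta=0.3))
```

Because every parameter in such a layer corresponds to a concentration, a binding energy, or an activation strength, the network can be read as chemistry rather than as a black box, which is the sense of "fully interpretable" used in the abstract.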
Pages: 499-507
Number of pages: 9
Related papers
50 records
  • [1] Towards an interpretable deep learning model of cancer
    Nilsson, Avlant
    Meimetis, Nikolaos
    Lauffenburger, Douglas A.
    NPJ PRECISION ONCOLOGY, 2025, 9 (01)
  • [2] Fully Interpretable Deep Learning Model Using IR Thermal Images for Possible Breast Cancer Cases
    Mirasbekov, Yerken
    Aidossov, Nurduman
    Mashekova, Aigerim
    Zarikas, Vasilios
    Zhao, Yong
    Ng, Eddie Yin Kwee
    Midlenko, Anna
    BIOMIMETICS, 2024, 9 (10)
  • [3] An Interpretable Deep Learning Model for Automatic Sound Classification
    Zinemanas, Pablo
    Rocamora, Martin
    Miron, Marius
    Font, Frederic
    Serra, Xavier
    ELECTRONICS, 2021, 10 (07)
  • [4] A Novel Interpretable Deep Learning Model for Ozone Prediction
    Chen, Xingguo
    Li, Yang
    Xu, Xiaoyan
    Shao, Min
    APPLIED SCIENCES-BASEL, 2023, 13 (21)
  • [5] Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis
    Liao, WangMin
    Zou, BeiJi
    Zhao, RongChang
    Chen, YuanQiong
    He, ZhiYou
    Zhou, MengJie
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2020, 24 (05) : 1405 - 1412
  • [6] Using interpretable deep learning to model cancer dependencies
    Lin, Chih-Hsu
    Lichtarge, Olivier
    BIOINFORMATICS, 2021, 37 (17) : 2675 - 2681
  • [7] A fully interpretable machine learning model for increasing the effectiveness of urine screening
    Del Ben, Fabio
    Da Col, Giacomo
    Cobarzan, Doriana
    Turetta, Matteo
    Rubin, Daniela
    Buttazzi, Patrizio
    Antico, Antonio
    AMERICAN JOURNAL OF CLINICAL PATHOLOGY, 2023, 160 (06) : 620 - 632
  • [8] Deep learning framework for interpretable quality control of echocardiography video
    Du, Liwei
    Xue, Wufeng
    Qi, Zhanru
    Shi, Zhongqing
    Guo, Guanjun
    Yang, Xin
    Ni, Dong
    Yao, Jing
    MEDICAL PHYSICS, 2025
  • [9] An interpretable deep learning model to map land subsidence hazard
    Rahmani, Paria
    Gholami, Hamid
    Golzari, Shahram
    ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH, 2024, 31 (11) : 17372 - 17386
  • [10] Interpretable Deep Learning Prediction Model for Compressive Strength of Concrete
    Zhang, Wei-Qi
    Wang, Hui-Ming
    Dongbei Daxue Xuebao/Journal of Northeastern University, 2024, 45 (05) : 738 - 744