Fully interpretable deep learning model of transcriptional control

Cited by: 16
Authors:
Liu, Yi [1 ]
Barr, Kenneth [2 ]
Reinitz, John [1 ,3 ,4 ,5 ]
Affiliations:
[1] Univ Chicago, Inst Genom & Syst Biol, Dept Stat, Chicago, IL 60637 USA
[2] Univ Chicago, Inst Genom & Syst Biol, Dept Human Genet, Chicago, IL 60637 USA
[3] Univ Chicago, Inst Genom & Syst Biol, Dept Ecol & Evolut, Chicago, IL 60637 USA
[4] Univ Chicago, Inst Genom & Syst Biol, Dept Mol Genet, Chicago, IL 60637 USA
[5] Univ Chicago, Inst Genom & Syst Biol, Dept Cell Biol, Chicago, IL 60637 USA
Funding: National Institutes of Health (USA)
Keywords:
COOPERATIVE DNA-BINDING; DROSOPHILA; EXPRESSION; ENHANCERS; STRIPE; SEGMENTATION; REPRESSION; MECHANISM; NETWORKS; SEQUENCE;
DOI: 10.1093/bioinformatics/btaa506
Chinese Library Classification: Q5 [Biochemistry]
Discipline codes: 071010; 081704
Abstract:
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent work in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model is set purely by machine learning considerations, with little attention to representing the internal structure of the biological system in the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control, in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are formulated in terms of specific chemical equations that appear different in form from those used in neural networks.
Results: In this paper, we give an example of a DNN which can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to the underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for analysis of much larger datasets obtained by systems biology studies on a genomic scale.
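The abstract's central point, that the chemical equations of transcription factor (TF) binding can be recast as neural network operations with interpretable parameters, can be illustrated with a small sketch. The code below is not the authors' model; it is a minimal, hypothetical example assuming a single TF binding site at thermodynamic equilibrium, showing that the standard binding isotherm f = Kc / (1 + Kc) is algebraically identical to a logistic (sigmoid) unit acting on log K + log c, so the "weights" of such a unit remain interpretable as binding constants and concentrations.

```python
# Hypothetical sketch (not the authors' code): the equilibrium fractional
# occupancy of a single TF binding site,
#     f = K*c / (1 + K*c),
# equals a sigmoid applied to log K + log c, i.e. a one-input "neuron"
# whose bias is the log binding constant and whose input is the log
# TF concentration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def occupancy_thermodynamic(K, c):
    """Fractional site occupancy from the standard binding isotherm."""
    return K * c / (1.0 + K * c)

def occupancy_as_neuron(log_K, log_c):
    """The same quantity expressed as a sigmoid unit with bias log_K
    acting on the input log_c."""
    return sigmoid(log_K + log_c)

K = 2.5   # illustrative association constant (arbitrary units)
c = 0.8   # illustrative TF concentration (arbitrary units)

print(occupancy_thermodynamic(K, c))               # 0.666...
print(occupancy_as_neuron(np.log(K), np.log(c)))   # identical value
```

The keywords (cooperative DNA-binding, repression) indicate that the full model composes many such binding terms with additional interactions, but the correspondence between chemical parameters and network weights sketched here is the kind of structure that keeps the resulting DNN interpretable.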
Pages: 499-507
Number of pages: 9