Fully interpretable deep learning model of transcriptional control

Cited by: 16
Authors
Liu, Yi [1 ]
Barr, Kenneth [2 ]
Reinitz, John [1 ,3 ,4 ,5 ]
Affiliations
[1] Univ Chicago, Inst Genom & Syst Biol, Dept Stat, Chicago, IL 60637 USA
[2] Univ Chicago, Inst Genom & Syst Biol, Dept Human Genet, Chicago, IL 60637 USA
[3] Univ Chicago, Inst Genom & Syst Biol, Dept Ecol & Evolut, Chicago, IL 60637 USA
[4] Univ Chicago, Inst Genom & Syst Biol, Dept Mol Genet, Chicago, IL 60637 USA
[5] Univ Chicago, Inst Genom & Syst Biol, Dept Cell Biol, Chicago, IL 60637 USA
Funding
National Institutes of Health (US);
Keywords
COOPERATIVE DNA-BINDING; DROSOPHILA; EXPRESSION; ENHANCERS; STRIPE; SEGMENTATION; REPRESSION; MECHANISM; NETWORKS; SEQUENCE;
DOI
10.1093/bioinformatics/btaa506
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent work in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model used is set purely by machine learning considerations, with little attention to representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control, in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are formulated in terms of specific chemical equations that appear different in form from those used in neural networks. Results: In this paper, we give an example of a DNN which can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to the underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for analysis of much larger data sets obtained by systems biology studies on a genomic scale.
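The abstract's central claim, that chemical equations of transcription factor (TF) binding can coincide with the mathematical structure of a neural network, can be illustrated with a minimal sketch. This is not the authors' actual model; the function names, parameter values, and the single-site/weighted-sum form below are illustrative assumptions only. The statistical-mechanical (Boltzmann) occupancy of a binding site has the same saturating form as a network nonlinearity, so an "interpretable" unit can carry chemical meaning:

```python
import numpy as np

def site_occupancy(tf_concentration, K):
    """Fractional occupancy of one site: [TF]*K / (1 + [TF]*K).

    This is the equilibrium (Boltzmann) form for independent binding;
    K is a hypothetical association constant."""
    x = np.asarray(tf_concentration) * np.asarray(K)
    return x / (1.0 + x)

def transcription_rate(concentrations, K_values, weights, theta):
    """Toy 'interpretable layer': occupancies are combined with
    activation/repression weights, then passed through a sigmoid,
    mirroring a standard neural-network unit."""
    occ = site_occupancy(concentrations, K_values)
    u = np.dot(weights, occ) - theta  # net regulatory input minus threshold
    return 1.0 / (1.0 + np.exp(-u))  # relative transcription rate in (0, 1)

# Two hypothetical sites: one activator (weight +3), one repressor (weight -2).
rate = transcription_rate([2.0, 0.5], [1.0, 4.0], [3.0, -2.0], 0.5)
```

Here every parameter retains a chemical reading (binding constants, activation strengths, a threshold), which is the sense in which such a network is "fully interpretable" rather than a black box.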
Pages: 499-507 (9 pages)
Related papers
(50 total)
  • [31] Methodology for Interpretable Reinforcement Learning Model for HVAC Energy Control
    Kotevska, Olivera
    Munk, Jeffrey
    Kurte, Kuldeep
    Du, Yan
    Amasyali, Kadir
    Smith, Robert W.
    Zandi, Helia
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 1555 - 1564
  • [32] Monkeypox Diagnosis With Interpretable Deep Learning
    Ahsan, Md. Manjurul
    Ali, Md. Shahin
    Hassan, Md. Mehedi
    Abdullah, Tareque Abu
    Gupta, Kishor Datta
    Bagci, Ulas
    Kaushal, Chetna
    Soliman, Naglaa F.
    IEEE ACCESS, 2023, 11 : 81965 - 81980
  • [33] Interpretable Deep Learning under Fire
    Zhang, Xinyang
    Wang, Ningfei
    Shen, Hua
    Ji, Shouling
    Luo, Xiapu
    Wang, Ting
    PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020, : 1659 - 1676
  • [34] Interpretable Control by Reinforcement Learning
    Hein, Daniel
    Limmer, Steffen
    Runkler, Thomas A.
    IFAC PAPERSONLINE, 2020, 53 (02): : 8082 - 8089
  • [35] An Interpretable Deep Learning Model for Speech Activity Detection Using Electrocorticographic Signals
    Stuart, Morgan
    Lesaja, Srdjan
    Shih, Jerry J.
    Schultz, Tanja
    Manic, Milos
    Krusienski, Dean J.
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, 30 : 2783 - 2792
  • [36] A robust and interpretable end-to-end deep learning model for cytometry data
    Hu, Zicheng
    Tang, Alice
    Singh, Jaiveer
    Bhattacharya, Sanchita
    Butte, Atul J.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2020, 117 (35) : 21373 - 21380
  • [37] Towards an interpretable deep learning model for mobile malware detection and family identification
    Iadarola, Giacomo
    Martinelli, Fabio
    Mercaldo, Francesco
    Santone, Antonella
    COMPUTERS & SECURITY, 2021, 105
  • [38] Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting
    Li, Longyuan
    Yan, Junchi
    Yang, Xiaokang
    Jin, Yaohui
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2901 - 2908
  • [39] An interpretable hybrid deep learning model for flood forecasting based on Transformer and LSTM
    Li, Wenzhong
    Liu, Chengshuai
    Xu, Yingying
    Niu, Chaojie
    Li, Runxi
    Li, Ming
    Hu, Caihong
    Tian, Lu
    JOURNAL OF HYDROLOGY-REGIONAL STUDIES, 2024, 54
  • [40] An interpretable deep-learning model for early prediction of sepsis in the emergency department
    Zhang, Dongdong
    Yin, Changchang
    Hunold, Katherine M.
    Jiang, Xiaoqian
    Caterino, Jeffrey M.
    Zhang, Ping
    PATTERNS, 2021, 2 (02):