Time-distanced gates in long short-term memory networks

Cited: 20
Authors
Gao, Riqiang [1 ]
Tang, Yucheng [1 ]
Xu, Kaiwen [1 ]
Huo, Yuankai [1 ]
Bao, Shunxing [1 ]
Antic, Sanja L. [2 ]
Epstein, Emily S. [2 ]
Deppen, Steve [3 ]
Paulson, Alexis B. [4 ]
Sandler, Kim L. [4 ]
Massion, Pierre P. [2 ]
Landman, Bennett A. [1 ,4 ]
Affiliations
[1] Vanderbilt Univ, Elect Engn & Comp Sci, Nashville, TN 37235 USA
[2] Vanderbilt Univ, Sch Med, Med, Nashville, TN 37235 USA
[3] Vanderbilt Univ, Med Ctr, Thorac Surg, Nashville, TN 37235 USA
[4] Vanderbilt Univ, Med Ctr, Radiol, Nashville, TN 37235 USA
Keywords
Lung cancer diagnosis; Longitudinal; Distanced LSTM; Temporal Emphasis Model;
DOI
10.1016/j.media.2020.101785
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Long Short-Term Memory (LSTM) network is widely used in modeling sequential observations in fields ranging from natural language processing to medical imaging. The LSTM has shown promise for interpreting computed tomography (CT) in lung screening protocols. Yet, traditional image-based LSTM models ignore interval differences, while recently proposed interval-modeled LSTM variants are limited in their ability to interpret temporal proximity. Meanwhile, clinical imaging acquisition may be irregularly sampled, and such sampling patterns may be commingled with clinical usages. In this paper, we propose the Distanced LSTM (DLSTM) by introducing time-distanced (i.e., time distance to the last scan) gates with a temporal emphasis model (TEM) targeting lung cancer diagnosis (i.e., evaluating the malignancy of pulmonary nodules). Briefly, (1) the time distance of every scan to the last scan is modeled explicitly, (2) time-distanced input and forget gates in DLSTM are introduced across regular and irregular sampling sequences, and (3) the newer scan in serial data is emphasized by the TEM. The DLSTM algorithm is evaluated with both simulated data and real CT images (from 1794 National Lung Screening Trial (NLST) patients with longitudinal scans and 1420 clinically studied patients). Experimental results on simulated data indicate that the DLSTM can capture families of temporal relationships that cannot be detected with traditional LSTM. Cross-validation on empirical CT datasets demonstrates that DLSTM achieves leading performance on both regularly and irregularly sampled data (e.g., improving on LSTM from 0.6785 to 0.7085 F1 score in NLST). In external validation on irregularly acquired data, the benchmarks achieved 0.8350 (CNN feature) and 0.8380 (with LSTM) AUC score, while the proposed DLSTM achieves 0.8905. In conclusion, the DLSTM approach is shown to be compatible with families of linear, quadratic, exponential, and log-exponential temporal models.
The DLSTM can be readily extended with other temporal dependence interactions while hardly increasing overall model complexity. (c) 2020 Elsevier B.V. All rights reserved.
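The abstract describes modulating the LSTM input and forget gates by each scan's time distance to the most recent scan. As a rough illustration of that idea, the sketch below implements a minimal NumPy LSTM cell whose gates are scaled by an exponential temporal emphasis g(d) = exp(-λ·d). The class name, weight layout, value of λ, and the exact form and placement of the emphasis term are illustrative assumptions for this sketch, not the paper's actual DLSTM/TEM formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DistancedLSTMCellSketch:
    """Toy LSTM cell with time-distanced gates (illustrative only).

    Gates are scaled by g(d) = exp(-lam * d), where d is the time
    distance from the current scan to the last scan, so the newest
    scan (d = 0, g = 1) receives full emphasis.
    """
    def __init__(self, input_dim, hidden_dim, lam=0.5, seed=0):
        rng = np.random.default_rng(seed)
        z = input_dim + hidden_dim
        self.lam = lam
        # One weight matrix per gate: input, forget, output, candidate.
        self.Wi = rng.standard_normal((z, hidden_dim)) * 0.1
        self.Wf = rng.standard_normal((z, hidden_dim)) * 0.1
        self.Wo = rng.standard_normal((z, hidden_dim)) * 0.1
        self.Wc = rng.standard_normal((z, hidden_dim)) * 0.1

    def step(self, x, h, c, d):
        """One recurrence step; d = time distance of this scan to the last scan."""
        g = np.exp(-self.lam * d)            # temporal emphasis in (0, 1]
        z = np.concatenate([x, h])
        i = g * sigmoid(z @ self.Wi)         # time-distanced input gate
        f = g * sigmoid(z @ self.Wf)         # time-distanced forget gate
        o = sigmoid(z @ self.Wo)
        c_tilde = np.tanh(z @ self.Wc)       # candidate cell state
        c_new = f * c + i * c_tilde
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# Irregularly sampled sequence of three "scans" with acquisition times;
# the distance of each scan to the last (most recent) scan drives g.
cell = DistancedLSTMCellSketch(input_dim=4, hidden_dim=8)
h, c = np.zeros(8), np.zeros(8)
scans = [np.ones(4) * k for k in range(3)]
times = np.array([0.0, 1.5, 4.0])            # irregular acquisition times
dists = times[-1] - times                    # distance to the final scan
for x, d in zip(scans, dists):
    h, c = cell.step(x, h, c, d)
print(h.shape)  # → (8,)
```

Because g depends only on the distance to the final scan, the same cell handles regularly and irregularly sampled sequences, which matches the abstract's claim that the time-distanced gates apply across both sampling regimes.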
Pages: 10