An In-Situ Visual Analytics Framework for Deep Neural Networks

Cited by: 0
Authors
Li, Guan [1 ]
Wang, Junpeng [2 ]
Wang, Yang [1 ,3 ]
Shan, Guihua [1 ,3 ]
Zhao, Ying [4 ]
Affiliations
[1] Chinese Acad Sci, Comp Network Informat Ctr, Beijing 100045, Peoples R China
[2] Visa Res, Palo Alto, CA 94306 USA
[3] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[4] Cent South Univ, Changsha 410017, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning model; Gaussian mixture model; in-situ; visual analytics; VISUALIZATION; INFORMATION; LIKELIHOOD;
DOI
10.1109/TVCG.2023.3339585
CLC classification number
TP31 [Computer Software];
Discipline classification code
081202; 0835;
Abstract
The past decade has witnessed the superior power of deep neural networks (DNNs) in applications across various domains. However, training a high-quality DNN remains a non-trivial task due to its massive number of parameters. Visualization has shown great potential in addressing this situation, as evidenced by numerous recent visualization works that aid in DNN training and interpretation. These works commonly employ a strategy of logging training-related data and conducting post-hoc analysis; based on the results of the offline analysis, the model can then be further trained or fine-tuned. This strategy, however, cannot cope with the increasing complexity of DNNs, because (1) the time-series data collected over training are usually too large to be stored in their entirety; (2) the huge I/O overhead significantly degrades training efficiency; and (3) post-hoc analysis does not allow rapid human intervention (e.g., stopping a training run with improper hyper-parameter settings to save computational resources). To address these challenges, we propose an in-situ visualization and analysis framework for the training of DNNs. Specifically, we employ feature extraction algorithms to reduce the size of training-related data in-situ and use the reduced data for real-time visual analytics. The state of model training is disclosed to model designers in real time, enabling on-demand human intervention to steer the training. Through concrete case studies, we demonstrate how our in-situ framework helps deep learning experts optimize DNNs and improve their analysis efficiency.
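The core data-reduction idea in the abstract (summarize training-related data in-situ instead of logging it raw) can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the paper's actual algorithm: it fits a scikit-learn Gaussian mixture model (the keywords suggest GMMs play this role) to one layer's activations at one training step, keeping only the mixture parameters for downstream visual analytics.

```python
# Illustrative sketch only: the paper's actual feature-extraction
# pipeline is not specified in this record. Here, a diagonal-covariance
# GMM stands in as the in-situ reduction of one layer's activations.
import numpy as np
from sklearn.mixture import GaussianMixture

def summarize_activations(activations, n_components=3, seed=0):
    """Reduce a (samples x neurons) activation matrix to GMM parameters.

    Rather than storing the full matrix at every training step, only the
    fitted mixture weights, means, and variances are kept and streamed
    to the real-time visual-analytics front end.
    """
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed).fit(activations)
    return {"weights": gmm.weights_,     # shape (k,)
            "means": gmm.means_,         # shape (k, neurons)
            "vars": gmm.covariances_}    # shape (k, neurons)

# Simulated activations of one hidden layer at one training step:
rng = np.random.default_rng(0)
acts = rng.normal(size=(1024, 64))       # 1024 samples, 64 neurons
summary = summarize_activations(acts)

full = acts.size                                  # 1024 * 64 floats
reduced = sum(v.size for v in summary.values())   # 3 + 192 + 192 floats
print(f"stored {reduced} floats instead of {full}")
```

The reduction is roughly two orders of magnitude here, which is the property that makes both full-run storage and low-overhead streaming feasible during training.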
Pages: 6770-6786 (17 pages)