Investigation of Training Multiple Instance Learning Networks with Instance Sampling

Times Cited: 0
Authors
Tarkhan, Aliasghar [1 ]
Trung Kien Nguyen [2 ]
Simon, Noah [1 ]
Dai, Jian [2 ]
Affiliations
[1] Univ Washington, Dept Biostat, Seattle, WA 98195 USA
[2] Genentech Inc, PHC Imaging Grp, San Francisco, CA 94080 USA
Keywords
Attention; Computational pathology; Deep learning; Multiple instance learning; Prostate cancer; Sampling; Transfer learning; Weakly supervised learning; PREDICTION;
DOI
10.1007/978-3-031-16876-5_10
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One challenge in training deep neural networks on gigapixel whole-slide images (WSIs) is the lack of pixel-level or patch-level (instance-level) annotation, owing to the high cost and time-consuming nature of labeling. Multiple instance learning (MIL), a typical weakly supervised learning method, aims to address this challenge by using only the slide-level label, without requiring patch labels. Not all patches/instances are predictive of the outcome; attention-based MIL leverages this fact to improve performance by weighting instances according to their contribution to predicting the outcome. A WSI typically contains hundreds of thousands of image patches, and training a deep neural network on thousands of patches per slide is computationally expensive and requires a long time to converge. One way to alleviate this issue is to sample a subset of the available instances/patches within each bag for training. While the benefit of sampling strategies in reducing computing time may be evident, there has been little effort to investigate their performance. This project proposes an adaptive sampling strategy and compares it with other sampling strategies. Although all sampling strategies substantially reduce computation time, their performance is influenced by the number of selected instances. We show that when only a few instances can be selected (on the order of 1-10 instances), adaptive sampling outperforms the other sampling strategies; when more instances can be selected (on the order of 100-1000 instances), random sampling outperforms the other strategies.
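The two ingredients the abstract describes, attention-based MIL pooling over patch embeddings and per-bag instance sampling, can be illustrated with a minimal numpy sketch. This is a hedged illustration, not the paper's implementation: the embedding dimension, the gating-free attention form, and the `sample_instances` helper (which accepts optional weights, e.g. attention scores from a previous epoch, to mimic adaptive sampling) are all assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(instances, w, v):
    """Simplified attention-based MIL pooling.

    instances: (n, d) array of patch embeddings for one bag (slide).
    v: (d, h) and w: (h,) are the attention parameters.
    Returns the (d,) attention-weighted bag embedding.
    """
    scores = np.tanh(instances @ v) @ w      # (n,) unnormalized attention scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                             # softmax -> per-instance weights
    return a @ instances                     # weighted sum over instances

def sample_instances(instances, k, weights=None):
    """Select k instances from a bag before training.

    weights=None gives uniform random sampling; passing a probability
    vector (e.g. normalized attention from an earlier epoch) gives an
    adaptive, importance-weighted sampling scheme.
    """
    n = len(instances)
    k = min(k, n)
    idx = rng.choice(n, size=k, replace=False, p=weights)
    return instances[idx]

# Toy bag: 1000 patches with 8-dim embeddings (a real WSI has far more).
bag = rng.normal(size=(1000, 8))
v = rng.normal(size=(8, 4))
w = rng.normal(size=4)

# Train on a 16-instance subset instead of all 1000 patches.
subset = sample_instances(bag, k=16)
z = attention_mil_pool(subset, w, v)
```

The point of the sketch is the cost trade-off the abstract studies: the pooling step now touches 16 rows instead of 1000, while the attention weights still let informative patches dominate the bag embedding.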
Pages: 95-104
Number of Pages: 10
Related Papers
50 records
  • [1] ATTENTION-BASED DEEP MULTIPLE INSTANCE LEARNING WITH ADAPTIVE INSTANCE SAMPLING
    Tarkhan, Aliasghar
    Trung Kien Nguyen
    Simon, Noah
    Bengtsson, Thomas
    Ocampo, Paolo
    Dai, Jian
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022,
  • [2] Multiple Instance Learning for Training Neural Networks under Label Noise
    Duffner, Stefan
    Garcia, Christophe
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [3] MILIS: Multiple Instance Learning with Instance Selection
    Fu, Zhouyu
    Robles-Kelly, Antonio
    Zhou, Jun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2011, 33 (05) : 958 - 977
  • [4] An Instance Selection Approach to Multiple Instance Learning
    Fu, Zhouyu
    Robles-Kelly, Antonio
    CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4, 2009, : 911 - +
  • [5] Multiple Instance Detection Networks With Adaptive Instance Refinement
    Wu, Zhihao
    Wen, Jie
    Xu, Yong
    Yang, Jian
    Zhang, David
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 267 - 279
  • [6] MULTIPLE-INSTANCE LEARNING WITH PAIRWISE INSTANCE SIMILARITY
    Yuan, Liming
    Liu, Jiafeng
    Tang, Xianglong
    INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, 2014, 24 (03) : 567 - 577
  • [7] Salient Instance Selection for Multiple-Instance Learning
    Yuan, Liming
    Liu, Songbo
    Huang, Qingcheng
    Liu, Jiafeng
    Tang, Xianglong
    NEURAL INFORMATION PROCESSING, ICONIP 2012, PT III, 2012, 7665 : 58 - 67
  • [8] UNSUPERVISED MULTIPLE-INSTANCE LEARNING FOR INSTANCE SEARCH
    Wang, Zhenzhen
    Yuan, Junsong
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [9] An Instance Selection and Optimization Method for Multiple Instance Learning
    Zhao, Haifeng
    Mao, Wenbo
    Wang, Jiangtao
    2014 INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS (SPAC), 2014, : 208 - 211
  • [10] Instance Label Prediction by Dirichlet Process Multiple Instance Learning
    Kandemir, Melih
    Hamprecht, Fred A.
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2014, : 380 - 389