CNN based Anthropomorphic Model Observer for Defect Localization

Cited: 1
Authors
Lorente, Iris [1 ]
Abbey, Craig [2 ]
Brankov, Jovan G. [1 ]
Affiliations
[1] IIT, ECE Dept, Chicago, IL 60616 USA
[2] Univ Calif Santa Barbara, Dept Psychol & Brain Sci, Santa Barbara, CA 93106 USA
Keywords
Model observer; medical image quality assessment; machine learning; deep learning; CNN; U-Net;
DOI
10.1117/12.2581119
Chinese Library Classification (CLC)
O43 [Optics];
Discipline code
070207 ; 0803 ;
Abstract
Model Observers (MO) are algorithms designed to evaluate and optimize the parameters of newly developed medical imaging technologies by providing a measure of human accuracy for a given diagnostic task. If designed well, these algorithms can expedite, and reduce the expense of, coordinating sessions with radiologists to evaluate the diagnostic potential of such reconstruction technologies. During the last decade, classic machine learning techniques combined with feature engineering have proved to be a good MO choice, allowing models to be trained to detect or localize defects and thereby potentially reducing the extent of the human observer studies needed. More recently, with advances in computer processing speed and capabilities, Convolutional Neural Networks (CNN) have been introduced as MOs, eliminating the need for feature engineering. In this paper, we design, train, and evaluate the accuracy of a fully convolutional U-Net structure as a MO for a defect forced-localization task in simulated images. This work focuses on the optimization of parameters, hyperparameters, and the choice of objective functions for CNN model training. Results, presented as human accuracy versus model accuracy as well as efficiencies with respect to the ideal observer, reveal strong agreement between the human and the MO for the chosen defect localization task.
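The forced-localization scoring the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the model observer emits a per-pixel suspicion map, that the observer must report the single most suspicious pixel (the argmax), and that a trial counts as correct when that pixel falls within a hypothetical acceptance radius of the true defect center.

```python
import numpy as np

def forced_localization_correct(score_map, true_center, radius=4.0):
    """Forced-localization decision: the observer reports the single most
    suspicious location (argmax of the score map); the trial is scored
    correct if that location lies within `radius` pixels of the true
    defect center. `radius` is an assumed tolerance, not from the paper."""
    row, col = np.unravel_index(np.argmax(score_map), score_map.shape)
    dist = np.hypot(row - true_center[0], col - true_center[1])
    return dist <= radius

# Toy example: a smooth Gaussian bump centered on a simulated defect.
yy, xx = np.mgrid[0:64, 0:64]
score_map = np.exp(-((yy - 30) ** 2 + (xx - 20) ** 2) / 50.0)
print(forced_localization_correct(score_map, (30, 20)))
```

Averaging this boolean over many simulated trials gives the model's localization accuracy, which can then be plotted against human accuracy on the same image ensemble, as in the human-vs-model comparisons the abstract describes.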
Pages: 6