Diagnostic performance with and without artificial intelligence assistance in real-world screening mammography

Cited by: 5
Authors
Lee, Si Eun [1]
Hong, Hanpyo [1]
Kim, Eun-Kyung [1,2]
Affiliations
[1] Yonsei Univ, Yongin Severance Hosp, Dept Radiol, Coll Med, Yongin, South Korea
[2] Yonsei Univ, Yongin Severance Hosp, Dept Radiol, Coll Med, 363 Dongbaekjukjeon Daero, Yongin, Gyeonggi Do, South Korea
Keywords
Breast cancer; Digital mammography; Diagnosis, computer-assisted; Artificial intelligence; COMPUTER-AIDED DETECTION
DOI
10.1016/j.ejro.2023.100545
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Purpose: To evaluate artificial intelligence-based computer-aided diagnosis (AI-CAD) for screening mammography, we analyzed the diagnostic performance of radiologists by providing and withholding AI-CAD results alternately every month.
Methods: This retrospective study was approved by the institutional review board with a waiver of informed consent. Between August 2020 and May 2022, 1819 consecutive women (mean age 50.8 ± 9.4 years) who underwent 2061 screening mammography examinations with same-day ultrasound at a single institution were included. Radiologists interpreted screening mammography in clinical practice with AI-CAD results provided or withheld in alternating months. The AI-CAD results were obtained retrospectively for analysis even when withheld from the radiologists. The diagnostic performance of radiologists was compared with that of stand-alone AI-CAD, and the performance of radiologists with and without AI-CAD assistance was also compared, using cancer detection rate, recall rate, sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC).
Results: Twenty-nine women with breast cancer and 1790 women without cancer were included. The diagnostic performance of the radiologists did not differ significantly with and without AI-CAD assistance. Radiologists with AI-CAD assistance showed the same sensitivity (76.5%) and similar specificity (92.3% vs 93.8%), AUC (0.844 vs 0.851), and recall rate (8.8% vs 7.4%) compared with stand-alone AI-CAD. Radiologists without AI-CAD assistance showed lower specificity (91.9% vs 94.6%) and accuracy (91.5% vs 94.1%) and a higher recall rate (8.6% vs 5.9%; all p < 0.05) compared with stand-alone AI-CAD.
Conclusion: When both screening mammography and ultrasound were performed, radiologists showed no significant difference in diagnostic performance with or without AI-CAD assistance for mammography. Without AI-CAD assistance, however, radiologists showed lower specificity and accuracy and a higher recall rate than stand-alone AI-CAD.
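The performance measures reported above (sensitivity, specificity, accuracy, recall rate, and cancer detection rate) all derive from per-examination recall outcomes. The Python sketch below is illustrative only and is not the authors' analysis code; it shows how such screening metrics are conventionally computed from confusion-matrix counts, and the class name and example counts are hypothetical.

    # Minimal, illustrative sketch (not the authors' analysis code) of how
    # screening mammography metrics are conventionally derived from
    # per-examination recall outcomes. Names and counts are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ScreeningCounts:
        tp: int  # cancers recalled (true positives)
        fn: int  # cancers not recalled (false negatives)
        fp: int  # non-cancers recalled (false positives)
        tn: int  # non-cancers not recalled (true negatives)

        @property
        def n_exams(self) -> int:
            return self.tp + self.fn + self.fp + self.tn

        def sensitivity(self) -> float:
            return self.tp / (self.tp + self.fn)

        def specificity(self) -> float:
            return self.tn / (self.tn + self.fp)

        def accuracy(self) -> float:
            return (self.tp + self.tn) / self.n_exams

        def recall_rate(self) -> float:
            # Fraction of all screening examinations flagged for further work-up.
            return (self.tp + self.fp) / self.n_exams

        def cancer_detection_rate(self) -> float:
            # Screen-detected cancers per 1000 examinations.
            return 1000.0 * self.tp / self.n_exams

    if __name__ == "__main__":
        counts = ScreeningCounts(tp=22, fn=7, fp=170, tn=1862)  # hypothetical counts
        print(f"Sensitivity: {counts.sensitivity():.1%}")
        print(f"Specificity: {counts.specificity():.1%}")
        print(f"Accuracy:    {counts.accuracy():.1%}")
        print(f"Recall rate: {counts.recall_rate():.1%}")
        print(f"CDR /1000:   {counts.cancer_detection_rate():.1f}")

Note that recall rate counts every examination flagged for further work-up (true and false positives alike), while cancer detection rate is customarily expressed per 1000 examinations. The AUC, by contrast, cannot be computed from these binary counts alone; it requires a continuous suspicion score per examination, such as a BI-RADS assessment or the AI-CAD abnormality score.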
Pages: 6
Related Papers
50 records in total
  • [31] Real-world testing of artificial intelligence system for surgical safety management
    Tabuchi, Hitoshi
    Masumoto, Hiroki
    Adachi, Shoto
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2020, 61 (07)
  • [32] Artificial intelligence and surgical radiology - how it is shaping real-world management
    Chai, Victor
    Wirth, Lara
    Cao, Ke
    Lim, Lincoln
    Yeung, Justin
    ANZ JOURNAL OF SURGERY, 2024, 94 (11) : 1894 - 1896
  • [33] Real-world application, challenges and implication of artificial intelligence in healthcare: an essay
    Mudgal, Shiv Kumar
    Agarwal, Rajat
    Chaturvedi, Jitender
    Gaur, Rakhi
    Ranjan, Nishit
    PAN AFRICAN MEDICAL JOURNAL, 2022, 43
  • [34] Tu1060 ARTIFICIAL INTELLIGENCE IN COLONOSCOPY: A REAL-WORLD EVALUATION
    Schacher, Fernando
    da Rocha, Carolina
    Borba, Sophia
    Wolff, Fernando
    Grillo, Leonardo
    Pinto, Rafael
    Richinho, Thales
    Segal, Fabio
    GASTROINTESTINAL ENDOSCOPY, 2024, 99 (06) : AB39 - AB40
  • [35] Artificial Intelligence for the Real World
    Davenport, Thomas H.
    Ronanki, Rajeev
    HARVARD BUSINESS REVIEW, 2018, 96 (01) : 108 - 116
  • [36] Artificial intelligence and the real world
    Jenkins, A
    FUTURES, 2003, 35 (07) : 779 - 786
  • [37] Evaluation and Real-World Performance Monitoring of Artificial Intelligence Models in Clinical Practice: Try It, Buy It, Check It
    Allen, Bibb
    Dreyer, Keith
    Stibolt, Robert
    Agarwal, Sheela
    Coombs, Laura
    Treml, Chris
    Elkholy, Mona
    Brink, Laura
    Wald, Christoph
    JOURNAL OF THE AMERICAN COLLEGE OF RADIOLOGY, 2021, 18 (11) : 1489 - 1496
  • [38] Predictors for Non-Diagnostic Images in Real World Deployment of Artificial Intelligence Assisted Diabetic Retinopathy Screening
    Shou, Benjamin L.
    Venkatesh, Kesavan
    Chen, Chang
    Ghidey, Ronel
    Lee, Tim
    Wang, Jiangxia
    Liu, Alvin
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2022, 63 (07)
  • [39] Assessing characteristics that impact diagnostic accuracy of artificial intelligence when screening for diabetic retinopathy in real world settings
    Koca, Dilara
    Scheetz, Jane
    McGuinness, Myra B.
    He, Mingguang
    CLINICAL AND EXPERIMENTAL OPHTHALMOLOGY, 2019, 47 : 112 - 112
  • [40] Impact of artificial intelligence in breast cancer screening with mammography
    Lan-Anh Dang
    Chazard, Emmanuel
    Poncelet, Edouard
    Serb, Teodora
    Rusu, Aniela
    Pauwels, Xavier
    Parsy, Clemence
    Poclet, Thibault
    Cauliez, Hugo
    Engelaere, Constance
    Ramette, Guillaume
    Brienne, Charlotte
    Dujardin, Sofiane
    Laurent, Nicolas
    BREAST CANCER, 2022, 29 (06) : 967 - 977