A fully interpretable machine learning model for increasing the effectiveness of urine screening

Cited by: 2
Authors
Del Ben, Fabio [1 ]
Da Col, Giacomo [2 ]
Cobarzan, Doriana [2 ]
Turetta, Matteo [1 ]
Rubin, Daniela [3 ]
Buttazzi, Patrizio [3 ]
Antico, Antonio [3 ]
Affiliations
[1] IRCCS, NCI, CRO Aviano, Aviano, Italy
[2] Fraunhofer Austria Res, KI4LIFE, Klagenfurt, Austria
[3] AULSS2 Marca Trevigiana, Treviso, Italy
Keywords
urinalysis; machine learning; data science; decision tree; FLOW-CYTOMETRY; SYSMEX UF-1000I; DECISION TREES; DIAGNOSIS; CULTURE;
DOI
10.1093/ajcp/aqad099
Chinese Library Classification (CLC)
R36 [Pathology];
Discipline code
100104
Abstract
Objectives: This article addresses the need for effective screening methods to identify negative urine samples before urine culture, reducing the workload, cost, and turnaround time of results in the microbiology laboratory. We try to overcome the limitations of current solutions, which are either too simple, limiting effectiveness (1 or 2 parameters), or too complex, limiting interpretation, trust, and real-world implementation ("black box" machine learning models).
Methods: The study analyzed 15,312 samples from 10,534 patients, using clinical features and data from the Sysmex UF-1000i automated analyzer. Decision tree (DT) models, with or without a lookahead strategy, were used because they offer a transparent set of logical rules that can be easily understood by medical professionals and implemented in automated analyzers.
Results: The best model achieved a sensitivity of 94.5% and classified negative samples based on age, bacteria, mucus, and 2 scattering parameters. The model reduced the workload by an additional 16% compared with the laboratory's current procedure, with an estimated financial impact of €40,000/y at 15,000 samples/y. The identified logical rules have a scientific rationale that matches existing knowledge in the literature.
Conclusions: Overall, this study provides an effective and interpretable screening method for urine culture in microbiology laboratories, using data from the Sysmex UF-1000i automated analyzer. Unlike other machine learning models, our model is interpretable, generating trust and enabling real-world implementation.
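The abstract's key point is that a decision tree reduces to an explicit, auditable rule set rather than an opaque score. A minimal sketch of what such a rule-based screen looks like in practice is below; the feature names follow the abstract (age, bacteria, mucus, two scattering parameters), but every threshold and the rule structure itself are hypothetical placeholders, not the paper's actual model, which would be learned from labelled UF-1000i data under a sensitivity constraint.

```python
def screen_sample(age, bacteria, mucus, scatter1, scatter2):
    """Return 'culture' or 'negative' from a small set of explicit rules.

    Illustrative only: all thresholds are made-up placeholders. The point
    is the form of the model, a short chain of human-readable conditions,
    which is what makes a decision tree screen interpretable.
    """
    if bacteria > 150.0:        # high bacterial count: always send to culture
        return "culture"
    if age < 5:                 # paediatric samples: never screened out
        return "culture"
    if mucus > 50.0 and scatter1 > 30.0:   # combined flag on mucus + scatter
        return "culture"
    if scatter2 > 80.0:         # abnormal second scatter channel
        return "culture"
    return "negative"           # all rules passed: safe to skip culture

samples = [
    dict(age=34, bacteria=12.0, mucus=10.0, scatter1=5.0, scatter2=20.0),
    dict(age=67, bacteria=400.0, mucus=5.0, scatter1=8.0, scatter2=15.0),
]
print([screen_sample(**s) for s in samples])  # ['negative', 'culture']
```

Because each path through the tree is a conjunction of such conditions, a laboratory professional can verify every rule against clinical knowledge, which is the trust argument the abstract makes against black-box models.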
Pages: 620-632
Page count: 13