Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists

Cited by: 22
Authors
Krenzer, Adrian [1]
Makowski, Kevin [1]
Hekalo, Amar [1]
Fitting, Daniel [2]
Troya, Joel [2]
Zoller, Wolfram G. [3]
Hann, Alexander [2]
Puppe, Frank [1]
Affiliations
[1] Dept Artificial Intelligence & Knowledge Syst, Sanderring 2, D-97070 Wurzburg, Germany
[2] Univ Hosp Wurzburg, Dept Internal Med 2, Intervent & Expt Endoscopy & InExEn, Oberdurrbacher Str 6, D-97080 Wurzburg, Germany
[3] Katharinen Hosp, Dept Internal Med & Gastroenterol, Kriegsbergstr 60, D-70174 Stuttgart, Germany
Keywords
Machine learning; Deep learning; Annotation; Endoscopy; Gastroenterology; Automation; Object detection; Polyp detection; Tracking
DOI
10.1186/s12938-022-01001-x
CLC number (Chinese Library Classification)
R318 [Biomedical Engineering]
Subject classification code
0831
Abstract
Background: Machine learning, especially deep learning, is becoming increasingly relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor for a successful implementation and for sustaining the progress of the machine learning model. Gastroenterological data in particular, which often consist of endoscopic videos, are cumbersome to annotate, and domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework. With this framework, instead of annotating every frame of a video sequence, experts only perform key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators, supported by machine learning, add the missing annotations for the frames in between.

Methods: In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, the non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, the relevant frames are selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. The non-expert can then adjust and modify the AI predictions and export the results, which can in turn be used to train the AI model.

Results: Using this framework, we were able to reduce the workload of domain experts on our data by a factor of 20 on average. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing the framework with a state-of-the-art semi-automated AI model increases the annotation speed further. In a prospective study with ten participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool.

Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining very high annotation quality. The framework incorporates a semi-automated annotation system based on trained object detection models. The software and framework are open source.
Pages: 23
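
A minimal, illustrative sketch of the keyframe-based propagation step described in the Methods paragraph above. In the published tool, a trained object detection model generates the in-between annotations; here, linear interpolation between two expert-annotated bounding boxes merely stands in for that step, and all names (Box, propagate_between_keyframes) are hypothetical rather than taken from the actual software.

from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box in pixel coordinates (top-left corner, width, height)."""
    x: float
    y: float
    w: float
    h: float

def interpolate(a: Box, b: Box, t: float) -> Box:
    """Linearly blend two boxes; t in [0, 1]."""
    lerp = lambda p, q: p + t * (q - p)
    return Box(lerp(a.x, b.x), lerp(a.y, b.y), lerp(a.w, b.w), lerp(a.h, b.h))

def propagate_between_keyframes(start_frame: int, start_box: Box,
                                end_frame: int, end_box: Box) -> dict[int, Box]:
    """Generate proposal boxes for every frame between two expert keyframes.

    Stand-in for the AI propagation step: the non-expert annotator would
    review and correct these proposals in the annotation interface.
    """
    span = end_frame - start_frame
    proposals = {}
    for f in range(start_frame + 1, end_frame):
        t = (f - start_frame) / span
        proposals[f] = interpolate(start_box, end_box, t)
    return proposals

# Example: an expert marks a polyp at frame 100 and frame 130; frames 101-129
# receive interpolated proposals for the non-expert to adjust and export.
proposals = propagate_between_keyframes(100, Box(320, 240, 80, 60),
                                        130, Box(400, 260, 90, 70))
print(len(proposals), "proposal frames generated")

In practice, such proposals would be loaded into the annotation interface, reviewed and corrected by the non-expert, and exported as training data for the detection model, as described in the abstract.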
相关论文
共 50 条
  • [31] CyAnno: a semi-automated approach for cell type annotation of mass cytometry datasets
    Kaushik, Abhinav
    Dunham, Diane
    He, Ziyuan
    Manohar, Monali
    Desai, Manisha
    Nadeau, Kari C.
    Andorf, Sandra
    BIOINFORMATICS, 2021, 37 (22) : 4164 - 4171
  • [32] TrackPad: Software for semi-automated single-cell tracking and lineage annotation
    Cornwell, J. A.
    Li, J.
    Mahadevan, S.
    Draper, J. S.
    Joun, G. L.
    Zoellner, H.
    Asli, N. S.
    Harvey, R. P.
    Nordon, R. E.
    SOFTWAREX, 2020, 11
  • [33] SEMI-AUTOMATED ANNOTATION TOOL OUTPERFORMS TRAINED MEDICAL STUDENTS AND IS COMPARABLE TO CLINICAL EXPERT PERFORMANCE FOR FRAME-LEVEL DETECTION OF COLORECTAL POLYPS
    Eelbode, Tom
    Ahmad, Omer F.
    Sinonquel, Pieter
    Kocadag, Timon Blakemore
    Narayan, Neil
    Rana, Nikita
    Maes, Frederik
    Lovat, Laurence B.
    Bisschops, Raf
    GASTROINTESTINAL ENDOSCOPY, 2021, 93 (06) : AB202 - AB202
  • [34] On-the-fly point annotation for fast medical video labeling
    Meyer, Adrien
    Mazellier, Jean-Paul
    Dana, Jeremy
    Padoy, Nicolas
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2024, 19 (06) : 1093 - 1101
  • [35] Machine learning for semi-automated scoping reviews
    Mozgai, Sharon
    Kaurloto, Cari
    Winn, Jade
    Leeds, Andrew
    Heylen, Dirk
    Hartholt, Arno
    Scherer, Stefan
    INTELLIGENT SYSTEMS WITH APPLICATIONS, 2023, 19
  • [36] Machine learning for semi-automated meteorite recovery
    Anderson, Seamus
    Towner, Martin
    Bland, Phil
    Haikings, Christopher
    Volante, William
    Sansom, Eleanor
    Devillepoix, Hadrien
    Shober, Patrick
    Hartig, Benjamin
    Cupak, Martin
    Jansen-Sturgeon, Trent
    Howie, Robert
    Benedix, Gretchen
    Deacon, Geoff
    METEORITICS & PLANETARY SCIENCE, 2020, 55 (11) : 2461 - 2471
  • [37] Enhanced semi-supervised learning for automatic video annotation
    Wang, Meng
    Hua, Xian-Sheng
    Dai, Li-Rong
    Song, Yan
    2006 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO - ICME 2006, VOLS 1-5, PROCEEDINGS, 2006, : 1485 - +
  • [38] Video annotation by active learning and semi-supervised ensembling
    Song, Yan
    Qi, Guo-Jun
    Hua, Xian-Sheng
    Dai, Li-Rong
    Wang, Ren-Hua
    2006 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO - ICME 2006, VOLS 1-5, PROCEEDINGS, 2006, : 933 - 936
  • [39] Learning Semantic Traversability With Egocentric Video and Automated Annotation Strategy
    Kim, Yunho
    Lee, Jeong Hyun
    Lee, Choongin
    Mun, Juhyeok
    Youm, Donghoon
    Park, Jeongsoo
    Hwangbo, Jemin
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (11): : 10423 - 10430
  • [40] Semi-automatic tool for motion annotation on complex video sequences
    Mahmood, M. H.
    Salvi, J.
    Llado, X.
    ELECTRONICS LETTERS, 2016, 52 (08) : 602 - 603