An Application-Dependent Framework for the Recognition of High-Level Surgical Tasks in the OR

Cited: 0
Authors
Lalys, Florent [1 ]
Riffaud, Laurent [1 ]
Bouget, David [1 ]
Jannin, Pierre [1 ]
Affiliation
[1] Fac Med CS 34317, INSERM, U746, F-35043 Rennes, France
Keywords
Surgical phase; digital microscope; cataract surgeries; DTW; classification; workflow
DOI
Not available
Chinese Library Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Surgical process analysis and modeling is a recent and important topic that aims to introduce a new generation of computer-assisted surgical systems. Among the techniques already in use for extracting data from the Operating Room, video images make it possible to automate assistance to the surgeon without altering the surgical routine. In this paper we propose an application-dependent framework that automatically extracts the phases of a surgery using only microscope videos as input data and that can be adapted to different surgical specialties. First, four distinct types of image-based classifiers were implemented to extract visual cues from video frames, each dedicated to one kind of cue: cues recognizable by color were detected with a color-histogram approach; for shape-oriented cues we trained a Haar classifier; for texture-oriented cues we used a bag-of-words approach with SIFT descriptors; and for all other cues we used a classical image-classification pipeline comprising feature extraction, feature selection, and supervised classification. Extracting this semantic vector for each video frame then made it possible to classify the time series using either Hidden Markov Models (HMM) or Dynamic Time Warping (DTW). The framework was validated on cataract surgeries, achieving accuracies of 95%.
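As a reading aid only, and not taken from the paper, the sketch below illustrates the two-stage idea the abstract describes: each frame is reduced to a binary semantic vector of visual cues, and the resulting time series is compared to reference surgeries with DTW. It is a minimal sketch assuming an L1 distance between frames and nearest-reference labeling; all names (color_cue_present, dtw_distance, cataract_ref) are hypothetical, and the random arrays stand in for real cue detections.

import numpy as np

def color_cue_present(frame_hist, cue_hist, threshold=0.5):
    """Detect a color-defined visual cue by histogram intersection.

    frame_hist, cue_hist: normalized color histograms (1-D arrays).
    The 0.5 threshold is an illustrative choice, not the paper's.
    """
    return np.minimum(frame_hist, cue_hist).sum() > threshold

def dtw_distance(seq_a, seq_b):
    """Classic DTW distance between two sequences of semantic vectors
    (rows = frames, columns = binary cue indicators)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.abs(seq_a[i - 1] - seq_b[j - 1]).sum()  # L1 frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

if __name__ == "__main__":
    # Hypothetical data: 120-frame query, one 150-frame reference, 6 cues.
    rng = np.random.default_rng(0)
    query = rng.integers(0, 2, size=(120, 6))
    references = {"cataract_ref": rng.integers(0, 2, size=(150, 6))}
    best = min(references, key=lambda k: dtw_distance(query, references[k]))
    print("closest reference surgery:", best)

In the paper itself the per-frame semantic vectors come from the four cue classifiers rather than random data, and an HMM is offered as an alternative to DTW for the temporal stage.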
Pages: 331-338 (8 pages)