Interactive multi-modal suturing

Cited by: 13
Authors
Payandeh, Shahram [1 ]
Shi, Fuhan [1 ]
Affiliations
[1] Simon Fraser Univ, Expt Robot & Graph Lab, Burnaby, BC V5A 1S6, Canada
Keywords
Virtual suturing; Suture model; Wound closure; Tissue tearing; Haptic feedback; Surgical training environment; Serious games; Surgery simulation
DOI
10.1007/s10055-010-0174-6
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline code
081203; 0835
Abstract
We present a mechanics-based interactive multi-modal environment designed as part of a serious gaming platform. The specific objectives are to teach basic suturing and knotting techniques for simple skin or soft tissue wound closure. The pre-wound suturing target, skin, or deformable tissue is modeled as a modified mass-spring system. The suturing material is modeled as a mechanics-based deformable linear object. Tools involved in a typical suturing procedure are also simulated. Collision management modules between the soft tissue and the needle, and between the soft tissue and the suture, are analyzed. In addition to modeling the interactive environment of a typical suturing procedure, the basic modeling approaches for evaluating a stitch formed by the user are also discussed. For example, if needle insertion points are too close to each other or to the edge of the wound, pulling the suture will tear the soft tissue instead of drawing the incision together. Experimental results show that our simulator can run on a standard personal computer and allows users to perform different suturing patterns with smooth graphics and haptic feedback.
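The two technical ingredients named in the abstract, a mass-spring model of the deformable tissue and a stitch-placement check that flags insertion points lying too close to each other or to the wound edge, can be illustrated with a minimal sketch. The Python below is not the authors' implementation; the names (Node, step, stitch_is_valid) and the thresholds MIN_SPACING and MIN_BITE are illustrative assumptions chosen only to show the idea.

# Minimal sketch (not the paper's code): a 1-D chain of mass-spring nodes
# integrated with explicit Euler, plus a hypothetical validity check for a
# stitch placement, mirroring the tearing condition described in the abstract.

import math

MIN_SPACING = 4.0   # assumed minimum distance between insertion points (mm)
MIN_BITE = 2.0      # assumed minimum distance from an insertion point to the wound edge (mm)

class Node:
    def __init__(self, x, y, fixed=False):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.fixed = fixed

def step(nodes, rest_len, k=50.0, damping=0.98, mass=0.01, dt=0.001):
    """Advance a chain of mass-spring nodes by one explicit-Euler step."""
    forces = [[0.0, 0.0] for _ in nodes]
    for i in range(len(nodes) - 1):
        a, b = nodes[i], nodes[i + 1]
        dx = b.pos[0] - a.pos[0]
        dy = b.pos[1] - a.pos[1]
        length = math.hypot(dx, dy) or 1e-9
        f = k * (length - rest_len)              # Hooke's law along the spring
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx; forces[i][1] += fy
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
    for node, (fx, fy) in zip(nodes, forces):
        if node.fixed:
            continue
        node.vel[0] = (node.vel[0] + fx / mass * dt) * damping
        node.vel[1] = (node.vel[1] + fy / mass * dt) * damping
        node.pos[0] += node.vel[0] * dt
        node.pos[1] += node.vel[1] * dt

def stitch_is_valid(p1, p2, wound_edge_dist):
    """Reject placements that would tear the tissue when the suture is pulled:
    insertion points too close together, or too close to the wound edge."""
    spacing = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    return spacing >= MIN_SPACING and wound_edge_dist >= MIN_BITE

if __name__ == "__main__":
    chain = [Node(i * 1.0, 0.0, fixed=(i == 0)) for i in range(5)]
    chain[-1].pos[1] = 2.0                        # perturb the free end of the chain
    for _ in range(1000):
        step(chain, rest_len=1.0)
    print([round(n.pos[1], 3) for n in chain])    # chain relaxes toward rest shape
    print(stitch_is_valid((0.0, 0.0), (1.0, 0.0), wound_edge_dist=3.0))  # too close -> False

In the paper's setting the tissue is a 2-D/3-D modified mass-spring mesh rather than a chain, and the evaluation criteria are part of the simulator's stitch-assessment module; the sketch only shows the general structure of such a model and check.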
Pages: 241-253
Page count: 13
Related papers
50 records in total
  • [1] Payandeh, Shahram; Shi, Fuhan. Interactive multi-modal suturing. Virtual Reality, 2010, 14: 241-253.
  • [2] Iba, S.; Paredis, C. J. J.; Khosla, P. K. Interactive multi-modal robot programming. 2002 IEEE International Conference on Robotics and Automation, Vols I-IV, Proceedings, 2002: 161-168.
  • [3] Li, X.; Tan, T. N.; Zhao, X. J. Multi-modal navigation for interactive wheelchair. Advances in Multimodal Interfaces - ICMI 2000, Proceedings, 2000, 1948: 590-598.
  • [4] Iba, S.; Paredis, C. J. J.; Khosla, P. K. Interactive multi-modal robot programming. Experimental Robotics IX, 2006, 21: 503 - +.
  • [5] Li, Shuzhen; Zhang, Tong; Chen, Bianna; Chen, C. L. Philip. MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis. IEEE Transactions on Affective Computing, 2023, 14(04): 2796-2809.
  • [6] Orasan, Constantin. Interactive Multi-Modal Question-Answering. Computational Linguistics, 2012, 38(02): 451-453.
  • [7] Torres, Nicholas; Ortega, Francisco R.; Bernal, Jonathan; Barreto, Armando; Rishe, Naphtali D. Towards Multi-modal Interaction with Interactive Paint. Universal Access in Human-Computer Interaction: Methods, Technologies, and Users, UAHCI 2018, Pt I, 2018, 10907: 299-308.
  • [8] Bartz, D.; Strasser, W.; Gürvit, O.; Freudenstein, D.; Skalej, M. Interactive and multi-modal visualization for neuroendoscopic interventions. Data Visualization 2001, 2001: 157 - +.
  • [9] Heller, Silvan; Arnold, Rahel; Gasser, Ralph; Gsteiger, Viktor; Parian-Scherb, Mahnaz; Rossetto, Luca; Sauter, Loris; Spiess, Florian; Schuldt, Heiko. Multi-modal Interactive Video Retrieval with Temporal Queries. Multimedia Modeling, MMM 2022, Pt II, 2022, 13142: 493-498.
  • [10] Marrinan, Thomas; Rizzi, Silvio; Nishimoto, Arthur; Johnson, Andrew; Insley, Joseph A.; Papka, Michael E. Interactive Multi-Modal Display Spaces for Visual Analysis. Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces (ISS 2016), 2016: 421-426.