Active tool use with the contralesional hand can reduce cross-modal extinction of touch on that hand

Cited: 56
Authors
Maravita, A [1]
Clarke, K
Husain, M
Driver, J
Affiliations
[1] UCL, Inst Cognit Neurosci, 17 Queen Sq, London WC1N 3AR, England
[2] Imperial Coll Sch Med, London, England
Funding
Medical Research Council (UK);
Keywords
extinction; neglect; vision; touch; cross-modal integration; brain lesion; tool use; rehabilitation;
DOI
10.1093/neucas/8.6.411
Chinese Library Classification
R74 [Neurology and Psychiatry];
Abstract
After a unilateral brain lesion, patients may show cross-modal, visual-tactile extinction: they may fail to report tactile stimuli on the contralesional hand when these are presented together with competing visual stimuli near the ipsilesional hand. Here we tested the hypothesis that this cross-modal extinction can be reduced after the patient has used a tool with the contralesional hand to reach for objects in the ipsilesional visual field. Consistent with previous work, we hypothesized that active use of a tool may extend cross-modal interactions between visual stimuli at the tip of the tool and tactile stimuli on the hand wielding it. In the new situation of a tool connecting the contralesional hand with ipsilesional visual space, competition between stimuli on these opposite sides may be reduced, so that extinction decreases. We studied patient BV, who showed reliable cross-modal, visual-tactile extinction after a right-hemisphere stroke. In two separate sessions, prolonged tool use (10-20 min) with the contralesional hand in ipsilesional space reduced cross-modal extinction for up to 60-90 min after training. We propose that an actively used tool can link cross-modal stimuli presented along its extension, thereby overcoming competition between stimuli on opposite sides of the body midline and thus modulating extinction.
Pages: 411-416
Page count: 6
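
As a purely illustrative aside (not part of the original record or study), the dependent measure described in the abstract above, cross-modal extinction as reduced detection of contralesional touch on bilateral trials relative to unilateral trials, could be quantified from trial-level data roughly as sketched below. All trial counts, numbers, and function names are hypothetical and do not reproduce the data reported for patient BV.

# Hypothetical sketch only: quantifies extinction as the drop in detection of
# contralesional (left-hand) touch on bilateral cross-modal trials
# (left touch + right visual stimulus) relative to unilateral left-touch trials.

def extinction_rate(detected_bilateral: int, n_bilateral: int,
                    detected_unilateral: int, n_unilateral: int) -> float:
    """Detection rate on unilateral touch trials minus detection rate
    on bilateral cross-modal trials; larger values = more extinction."""
    return (detected_unilateral / n_unilateral
            - detected_bilateral / n_bilateral)

# Invented trial counts for a pre-training block and a post-tool-use block.
pre  = extinction_rate(detected_bilateral=6,  n_bilateral=20,
                       detected_unilateral=19, n_unilateral=20)
post = extinction_rate(detected_bilateral=15, n_bilateral=20,
                       detected_unilateral=19, n_unilateral=20)

print(f"extinction before tool use: {pre:.2f}")   # 0.65 with these numbers
print(f"extinction after tool use:  {post:.2f}")  # 0.20 with these numbers
print(f"reduction:                  {pre - post:.2f}")

This sketch only makes the measure concrete; the comparison of pre- versus post-training extinction reported in the abstract is the authors' own design, not reproduced here.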