Cross-Modal Sensory Integration of Visual-Tactile Motion Information: Instrument Design and Human Psychophysics

Cited by: 9
Authors
Pei, Yu-Cheng [1 ,2 ,3 ]
Chang, Ting-Yu [1 ]
Lee, Tsung-Chi [1 ]
Saha, Sudipta [1 ]
Lai, Hsin-Yi [1 ]
Gomez-Ramirez, Manuel [4 ]
Chou, Shih-Wei [1 ]
Wong, Alice M. K. [1 ]
Affiliations
[1] Chang Gung Mem Hosp Linkou, Dept Phys Med & Rehabil, Tao Yuan 333, Taiwan
[2] Chang Gung Univ, Hlth Aging Res Ctr, Tao Yuan 333, Taiwan
[3] Chang Gung Univ, Sch Med, Tao Yuan 333, Taiwan
[4] Johns Hopkins Univ, Zanvyl Krieger Mind Brain Inst, Baltimore, MD 21218 USA
Source
SENSORS | 2013, Vol. 13, Issue 6
Keywords
visual-tactile integration; direction of motion; congruency; haptic approach; tactile stimulator; MULTISENSORY INTEGRATION; CUES; STIMULATOR; TEXTURE; NEURONS; SLANT; TOUCH
DOI
10.3390/s130607212
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Information obtained from multiple sensory modalities, such as vision and touch, is integrated to yield a holistic percept. Because a haptic approach usually involves cross-modal sensory experiences, an apparatus is needed that can characterize both how a biological system integrates visual-tactile sensory information and how a robotic device infers object information arising from both vision and touch. In the present study, we developed a novel visual-tactile cross-modal integration stimulator that consists of an LED panel for presenting visual stimuli and a tactile stimulator with three degrees of freedom that can deliver tactile motion stimuli with arbitrary motion direction, speed, and indentation depth into the skin. The apparatus can present cross-modal stimuli in which the spatial locations of the visual and tactile stimulations are perfectly aligned. We presented visual-tactile stimuli in which the visual and tactile directions were either congruent or incongruent, and human observers reported the perceived direction of visual motion. The results showed that the perceived direction of visual motion can be biased by the direction of tactile motion when the visual signal is weakened. They also showed that visual-tactile motion integration follows the rule of temporal congruency of multi-modal inputs, a fundamental property of cross-modal integration.
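The finding that tactile motion biases the perceived visual direction more strongly when the visual signal is weakened is the signature of reliability-weighted cue combination. The sketch below is purely illustrative and is not the authors' analysis; the function name, the inverse-variance weighting scheme, and all noise parameters are assumptions chosen to demonstrate the qualitative effect.

```python
def integrate_directions(visual_dir, tactile_dir, visual_sigma, tactile_sigma):
    """Reliability-weighted combination of two motion-direction cues (degrees).

    Each cue is weighted by its inverse variance, as in standard
    maximum-likelihood cue-combination models (illustrative, not the
    paper's model).
    """
    w_visual = 1.0 / visual_sigma ** 2
    w_tactile = 1.0 / tactile_sigma ** 2
    return (w_visual * visual_dir + w_tactile * tactile_dir) / (w_visual + w_tactile)

# Incongruent condition: visual motion at 0 deg, tactile motion at 90 deg.
# A reliable visual cue (small sigma) is barely biased by touch; a weak
# visual cue (large sigma) is pulled strongly toward the tactile direction.
strong_visual = integrate_directions(0.0, 90.0, visual_sigma=5.0, tactile_sigma=20.0)
weak_visual = integrate_directions(0.0, 90.0, visual_sigma=40.0, tactile_sigma=20.0)
print(round(strong_visual, 1))  # -> 5.3  (small tactile bias)
print(round(weak_visual, 1))    # -> 72.0 (large tactile bias)
```

Under this weighting, degrading the visual signal (larger `visual_sigma`) shifts the combined estimate toward the tactile direction, mirroring the psychophysical result reported in the abstract.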
Pages: 7212-7223 (12 pages)