Speech recognition for command entry in multimodal interaction

Cited by: 3
Authors
Tyfa, DA [1 ]
Howes, M [1 ]
Affiliations
[1] Univ Leeds, Sch Psychol, Leeds LS2 9JT, W Yorkshire, England
Funding
UK Engineering and Physical Sciences Research Council;
Keywords
speech recognition; multiple resources; multimodal interaction; command entry; hands-busy; eyes-busy; verbal interference;
DOI
10.1006/ijhc.1999.0355
CLC number
TP3 [Computing Technology; Computer Technology];
Subject classification code
0812;
Abstract
Two experiments investigated the cognitive efficiency of using speech recognition in combination with the mouse and keyboard for a range of word processing tasks. The first experiment examined the potential of this multimodal combination to increase performance by engaging concurrent multiple resources. Speech and mouse responses were compared when using menu and direct (toolbar icon) commands, making for a fairer comparison than in previous research which has been biased against the mouse. Only a limited basis for concurrent resource use was found, with speech leading to poorer task performance with both command types. Task completion times were faster with direct commands for both speech and mouse responses, and direct commands were preferred. In the second experiment, participants were free to choose command type, and nearly always chose to use direct commands with both response modes. Speech performance was again worse than mouse, except for tasks which involved a large amount of hand and eye movement, or where direct speech was used but mouse commands were made via menus. In both experiments recognition errors were low, and although they had some detrimental effect on speech use, problems in combining speech and manual modes were highlighted. Potential verbal interference effects when using speech are discussed. (C) 2000 Academic Press.
Pages: 637-667
Page count: 31
Related papers
50 records in total
  • [41] A Speech to Machine Interface Based on the Frequency Domain Command Recognition
    Almayouf, Nojood
    Qaisar, S. M.
    Alharbi, Lojain
    Madani, Raghdah
    2017 IEEE 2ND INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP), 2017, : 356 - 360
  • [42] Multimodal Driver Interaction with Gesture, Gaze and Speech
    Aftab, Abdul Rafey
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 487 - 492
  • [43] Speech and Sketching: An Empirical Study of Multimodal Interaction
    Adler, A.
    Davis, R.
    SKETCH-BASED INTERFACES AND MODELING 2007, 2007, : 83 - 90
  • [44] Intelligent speech recognition for computerized physician order entry
    Chow, Yuen-Ho
    Quek, Hui Nar
    Purnadi, Peter Honggowibowo
    Chia, Patrick
    WMSCI 2005: 9th World Multi-Conference on Systemics, Cybernetics and Informatics, Vol 6, 2005, : 138 - 141
  • [45] Speech recognition and direct data entry in clinical microbiology
    O'Hara, SP
    Athersuch, R
    BRITISH JOURNAL OF BIOMEDICAL SCIENCE, 1996, 53 (03) : 209 - 213
  • [46] A Multimodal Asynchronous Human-Machine Interaction Method Based on Electrooculography and Speech Recognition for Wheelchair Control
    Li, Kendi
    Chen, Di
    Rao, Zuguang
    Guan, Zijing
    Jiang, Ya
    Li, Yuanqing
    IEEE SENSORS JOURNAL, 2024, 24 (23) : 39195 - 39205
  • [47] Multimodal interaction techniques for situational awareness and command of robotic combat entities
    Neely, HE
    Belvin, RS
    Fox, JR
    Daily, MJ
    2004 IEEE AEROSPACE CONFERENCE PROCEEDINGS, VOLS 1-6, 2004, : 3297 - 3305
  • [48] Multimodal Data Fusion Architectures in Audiovisual Speech Recognition
    Sayed, Hadeer M.
    ElDeeb, Hesham E.
    Taiel, Shereen A.
    INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 2, WORLDCIST 2023, 2024, 800 : 655 - 667
  • [49] Real time face detection for multimodal speech recognition
    Murai, K
    Nakamura, S
    IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, VOL I AND II, PROCEEDINGS, 2002, : A373 - A376
  • [50] Multimodal emotion recognition based on speech and ECG signals
    Huang C.
    Jin Y.
    Wang Q.
    Zhao L.
    Zou C.
    Dongnan Daxue Xuebao (Ziran Kexue Ban)/Journal of Southeast University (Natural Science Edition), 2010, 40 (05): : 895 - 900