Constraining user response via multimodal dialog interface

Cited by: 3
Authors
Baker K. [1 ]
McKenzie A. [2 ]
Biermann A. [2 ]
Webelhuth G. [3 ]
Affiliations
[1] Linguistics Department, Ohio State University, 222 Oxley Hall, Columbus, OH 43210-1298
[2] Department of Computer Science, Duke University, Durham, NC 27708-0129
[3] Sem. für Englische Philologie, Georg-August-Univ. Göttingen, 37073 Göttingen
Keywords
Constraining user response; Multimodal dialog interface; Speech recognition
DOI
10.1023/B:IJST.0000037069.82313.57
Abstract
This paper presents the results of an experiment comparing two different designs of an automated dialog interface. We compare a multimodal design utilizing text displays coordinated with spoken prompts to a voice-only version of the same application. Our results show that the text-coordinated version is more efficient in terms of word recognition and number of out-of-grammar responses, and is equal to the voice-only version in terms of user satisfaction. We argue that this type of multimodal dialog interface effectively constrains user response to allow for better speech recognition without increasing cognitive load or compromising the naturalness of the interaction.
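The design the abstract describes can be illustrated with a minimal sketch, not taken from the paper: a dialog turn shows the acceptable responses as on-screen text while the prompt is spoken aloud, and the recognizer's grammar is restricted to the displayed phrases, which is what improves word recognition and reduces out-of-grammar responses. All names here (DialogTurn, speak, recognize_constrained) are hypothetical placeholders, not the authors' system.

```python
# Minimal sketch of a text-coordinated dialog turn (hypothetical, not the
# authors' implementation): display the legal responses, speak the prompt,
# and constrain recognition to the displayed phrases.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DialogTurn:
    spoken_prompt: str            # what the system says aloud
    displayed_options: List[str]  # text shown on screen, coordinated with the prompt


def speak(text: str) -> None:
    """Placeholder for a text-to-speech call."""
    print(f"[TTS] {text}")


def recognize_constrained(grammar: List[str]) -> Optional[str]:
    """Placeholder for a recognizer whose language model is limited to `grammar`.
    Returns the matched phrase, or None for an out-of-grammar utterance."""
    heard = input("[ASR] user says: ").strip().lower()
    matches = [g for g in grammar if g.lower() == heard]
    return matches[0] if matches else None


def run_turn(turn: DialogTurn) -> Optional[str]:
    # Displaying the legal responses lets the user simply read one back,
    # which is the constraining effect studied in the paper.
    print("Options:", " | ".join(turn.displayed_options))
    speak(turn.spoken_prompt)
    return recognize_constrained(turn.displayed_options)


if __name__ == "__main__":
    turn = DialogTurn(
        spoken_prompt="Which account would you like to access?",
        displayed_options=["checking", "savings", "money market"],
    )
    print("Recognized:", run_turn(turn))
```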
Pages: 251-258
Number of pages: 7