Understanding How People with Limited Mobility Use Multi-Modal Input

Cited by: 11
Authors
Wentzel, Johann [1 ]
Junuzovic, Sasa [2 ]
Devine, James [3 ]
Porter, John R. [4 ]
Mott, Martez E. [2 ]
Affiliations
[1] University of Waterloo, Waterloo, ON, Canada
[2] Microsoft Research, Redmond, WA, USA
[3] Microsoft Research, Cambridge, England
[4] Microsoft, Redmond, WA, USA
DOI: 10.1145/3491102.3517458
Abstract
People with limited mobility often use multiple devices when interacting with computing systems, but little is known about the impact these multi-modal configurations have on daily computing use. A deeper understanding of the practices, preferences, obstacles, and workarounds associated with accessible multi-modal input can uncover opportunities to create more accessible computer applications and hardware. We explored how people with limited mobility use multi-modality through a three-part investigation grounded in the context of video games. First, we surveyed 43 people to learn about their preferred devices and configurations. Next, we conducted semi-structured interviews with 14 participants to understand their experiences and challenges with using, configuring, and discovering input setups. Lastly, we performed a systematic review of 74 YouTube videos to illustrate and categorize input setups and adaptations in-situ. We conclude with a discussion on how our findings can inform future accessibility research for current and emerging computing technologies.
Pages: 17