Understanding How People with Limited Mobility Use Multi-Modal Input

Cited by: 11
Authors
Wentzel, Johann [1 ]
Junuzovic, Sasa [2 ]
Devine, James [3 ]
Porter, John R. [4 ]
Mott, Martez E. [2 ]
Affiliations
[1] University of Waterloo, Waterloo, ON, Canada
[2] Microsoft Research, Redmond, WA, USA
[3] Microsoft Research, Cambridge, England
[4] Microsoft, Redmond, WA, USA
Source
PROCEEDINGS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI '22) | 2022
DOI
10.1145/3491102.3517458
Abstract
People with limited mobility often use multiple devices when interacting with computing systems, but little is known about the impact these multi-modal configurations have on daily computing use. A deeper understanding of the practices, preferences, obstacles, and workarounds associated with accessible multi-modal input can uncover opportunities to create more accessible computer applications and hardware. We explored how people with limited mobility use multi-modality through a three-part investigation grounded in the context of video games. First, we surveyed 43 people to learn about their preferred devices and configurations. Next, we conducted semi-structured interviews with 14 participants to understand their experiences and challenges with using, configuring, and discovering input setups. Lastly, we performed a systematic review of 74 YouTube videos to illustrate and categorize input setups and adaptations in-situ. We conclude with a discussion on how our findings can inform future accessibility research for current and emerging computing technologies.
Pages: 17