Development of an Adaptive User Support System Based on Multimodal Large Language Models

Cited by: 0
Authors
Wang, Wei [1 ]
Li, Lin [2 ]
Wickramathilaka, Shavindra [1 ]
Grundy, John [1 ]
Khalajzadeh, Hourieh [3 ]
Obie, Humphrey O. [1 ]
Madugalla, Anuradha [1 ]
Affiliations
[1] Monash University, Department of Software Systems & Cybersecurity, Melbourne, VIC, Australia
[2] RMIT University, Department of Information Systems & Business Analytics, Melbourne, VIC, Australia
[3] Deakin University, School of Information Technology, Melbourne, VIC, Australia
Keywords
Adaptive User Support; User Interface; Multimodal Large Language Models (MLLMs)
DOI
10.1109/VL/HCC60511.2024.00044
Abstract
As software systems become more complex, some users find it challenging to use these tools efficiently, leading to frustration and decreased productivity. We tackle the shortcomings of conventional user support mechanisms in software and aim to create and assess a user support system that integrates Multimodal Large Language Models (MLLMs) for producing support messages. Our system first segments the user interface to serve as a reference for element selection and asks users to specify their preferences for support messages. The system then generates personalised support messages for each individual. We propose that user support systems enhanced with MLLMs can provide more efficient and bespoke assistance than conventional methods.
Pages: 344-347
Page count: 4
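
The abstract describes a three-step pipeline: segment the user interface so the user can select the element they need help with, collect the user's preferences for support messages, and have an MLLM generate a personalised message for that element. The sketch below wires these steps together as a rough illustration only; the UIRegion and SupportPreferences types and the segment_ui and query_mllm helpers are hypothetical placeholders, not the authors' implementation, and a real system would back query_mllm with an actual vision-capable model.

```python
from dataclasses import dataclass

@dataclass
class UIRegion:
    """A segmented UI element the user can select for help (hypothetical type)."""
    region_id: str
    label: str
    bounding_box: tuple  # (x, y, width, height) in screen pixels

@dataclass
class SupportPreferences:
    """User-specified preferences for generated support messages (hypothetical type)."""
    tone: str = "friendly"        # e.g. "friendly", "formal"
    detail: str = "step-by-step"  # e.g. "brief", "step-by-step"
    language: str = "en"

def segment_ui(screenshot_path: str) -> list[UIRegion]:
    """Hypothetical stand-in for the segmentation step. A real system
    might use a vision model or MLLM to detect widgets; this stub
    returns a single fixed example region."""
    return [UIRegion("r1", "Export button", (820, 40, 96, 32))]

def build_prompt(region: UIRegion, prefs: SupportPreferences, task: str) -> str:
    """Compose a text prompt pairing the selected UI element with the
    user's stated preferences, to be sent alongside the screenshot."""
    return (
        f"The user needs help with the '{region.label}' element "
        f"while trying to: {task}. "
        f"Write a {prefs.detail}, {prefs.tone} support message "
        f"in language '{prefs.language}'."
    )

def query_mllm(prompt: str, screenshot_path: str) -> str:
    """Placeholder for an MLLM call (prompt plus screenshot sent to a
    vision-capable chat model). Returns canned text here so the sketch
    runs without any API access."""
    return f"[MLLM response to: {prompt!r} with image {screenshot_path}]"

if __name__ == "__main__":
    regions = segment_ui("app_screenshot.png")
    prefs = SupportPreferences(tone="formal", detail="brief")
    prompt = build_prompt(regions[0], prefs, "export a report as PDF")
    print(query_mllm(prompt, "app_screenshot.png"))
```

The point of the sketch is only the data flow: the user-selected region and the stated preferences both end up in the prompt sent to the model, which is what allows the generated support message to be personalised per user.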