Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

Cited by: 4
Authors
Yan, An [1 ]
He, Zhankui [1 ]
Li, Jiacheng [1 ]
Zhang, Tianyang [1 ]
McAuley, Julian [1 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA USA
Keywords
Datasets; Text Generation; Multi-Modality; Contrastive Learning
DOI
10.1145/3539618.3592036
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Existing explanation models generate only text for recommendations and still struggle to produce diverse content. In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations. Specifically, we first select a personalized image set that is most relevant to a user's interest in a recommended item. Then, natural language explanations are generated conditioned on the selected images. For this new task, we collect a large-scale dataset from Google Maps and construct a high-quality subset for generating multi-modal explanations. We propose a personalized multi-modal framework that can generate diverse and visually-aligned explanations via contrastive learning. Experiments show that our framework benefits from different modalities as inputs, and produces more diverse and expressive explanations than previous methods across a variety of evaluation metrics.
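To make the two-stage pipeline in the abstract concrete, here is a minimal sketch, not the paper's implementation: stage 1 ranks an item's candidate images against an embedding of the user's review history and keeps the top-k; stage 2 would condition a text generator on the selected images. All names below (encode, select_images, generate_explanation) are hypothetical stand-ins, assuming a CLIP-style vision-language encoder for scoring; the fake embeddings exist only so the sketch runs without model weights.

```python
# Minimal sketch of the personalized-showcases pipeline (assumptions noted inline).
import hashlib
import numpy as np

def encode(x: str, dim: int = 64) -> np.ndarray:
    # Hypothetical stand-in for a pretrained vision-language encoder:
    # deterministic fake unit vector so the sketch runs without model weights.
    seed = int(hashlib.md5(x.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def select_images(user_history: list[str], item_images: list[str], k: int = 3) -> list[str]:
    # Stage 1: rank the item's images by cosine similarity to the mean
    # embedding of the user's past reviews and keep the k most relevant.
    u = np.mean([encode(h) for h in user_history], axis=0)
    u /= np.linalg.norm(u)
    scored = sorted(((float(u @ encode(img)), img) for img in item_images), reverse=True)
    return [img for _, img in scored[:k]]

def generate_explanation(selected: list[str]) -> str:
    # Stage 2 (stub): per the abstract, a multi-modal generator trained with
    # contrastive learning would produce diverse, visually-aligned text here.
    return "Explanation grounded in: " + ", ".join(selected)

if __name__ == "__main__":
    history = ["loved the garlic noodles", "great patio for groups"]
    images = [f"photo_{i}.jpg" for i in range(8)]
    print(generate_explanation(select_images(history, images)))
```

The split mirrors the abstract's description: image selection personalizes the visual evidence first, and generation is then conditioned on that evidence rather than on the whole image pool.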
Pages: 2251-2255
Number of pages: 5
Related papers
50 items in total
  • [1] How people produce understandable multi-modal explanations
    Engle, RA
    PROCEEDINGS OF THE NINETEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY, 1997, : 909 - 909
  • [2] Personalized Multi-modal Video Retrieval on Mobile Devices
    Zhang, Haotian
    Jepson, Allan D.
    Mohomed, Iqbal
    Derpanis, Konstantinos G.
    Zhang, Ran
    Fazly, Afsaneh
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1185 - 1191
  • [3] An Algorithm for Generating a Diverse Set of Multi-Modal Journeys
    Mosquera, Federico
    Smet, Pieter
    Vanden Berghe, Greet
    ALGORITHMS, 2022, 15 (11)
  • [4] Experimental Study on Generating Multi-modal Explanations of Black-box Classifiers in terms of Gray-box Classifiers
    Alonso, Jose M.
    Toja-Alamancos, J.
    Bugarin, A.
    2020 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), 2020
  • [5] Personalized Context-Aware Multi-Modal Transportation Recommendation
    Chen, Xianda
    Zhu, Meixin
    Tiu, PakHin
    Wang, Yinhai
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 3276 - 3281
  • [6] Simulating Adaptive, Personalized, Multi-modal Mobility in Smart Cities
    Poxrucker, Andreas
    Bahle, Gernot
    Lukowicz, Paul
    SMART CITY 360, 2016, 166 : 113 - 124
  • [7] Personalized clothing matching recommendation based on multi-modal fusion
    Liu J.
    Zhang F.
    Hu X.
    Peng T.
    Li L.
    Zhu Q.
    Zhang J.
    Fangzhi Xuebao/Journal of Textile Research, 2023, 44 (03): 176 - 186
  • [8] Generating information for small data sets with a multi-modal distribution
    Li, Der-Chiang
    Lin, Liang-Sian
    DECISION SUPPORT SYSTEMS, 2014, 66 : 71 - 81
  • [9] Requirements for automatically generating multi-modal interfaces for complex appliances
    Nichols, J
    Myers, B
    Higgins, K
    Rosenfeld, R
    Shriver, S
    Higgins, M
    Hughes, J
    FOURTH IEEE INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, PROCEEDINGS, 2002, : 377 - 382
  • [10] Multi-modal Knowledge Communication: How Museum Visitors find out about Museum Showcases
    Kesselheim, Wolfgang
    FACHSPRACHE-JOURNAL OF PROFESSIONAL AND SCIENTIFIC COMMUNICATION, 2010, 32 (3-4): : 122 - 144