Multi-modal multi-view Bayesian semantic embedding for community question answering

Cited by: 15
Authors
Sang, Lei [1 ,2 ]
Xu, Min [2 ]
Qian, ShengSheng [3 ]
Wu, Xindong [4 ]
Institutions
[1] Hefei Univ Technol, Hefei, Anhui, Peoples R China
[2] Univ Technol Sydney, Sydney, NSW, Australia
[3] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[4] Univ Louisiana Lafayette, Lafayette, LA 70504 USA
Keywords
Community question answering; Semantic embedding; Multi-modal; Multi-view; Topic model; Word embedding;
DOI
10.1016/j.neucom.2018.12.067
CLC Classification
TP18 [Theory of artificial intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Semantic embedding has demonstrated its value in learning latent representations of data and can be effectively adopted for many applications. However, it is difficult to propose a joint learning framework for semantic embedding in Community Question Answering (CQA), because CQA data have multi-view and sparse properties. In this paper, we propose a generic Multi-modal Multi-view Semantic Embedding (MMSE) framework via a Bayesian model for question answering. Compared with existing semantic learning methods, the proposed model has two main advantages: (1) To handle the multi-view property, we utilize a Gaussian topic model to learn semantic embeddings from both a local view and a global view. (2) To handle the sparsity of question-answer pairs in CQA, social structure information is incorporated to enhance the quality of text content semantic embeddings from other answers, using a shared topic distribution to model the relationship between the two modalities (user relationships and text content). We evaluate our model on the question answering and expert finding tasks, and the experimental results on two real-world datasets show the effectiveness of our MMSE model for semantic embedding learning. (C) 2018 Published by Elsevier B.V.
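The two-view representation described in the abstract can be illustrated with a minimal sketch (not the authors' implementation, which is a joint Bayesian model): here a question or answer is embedded by concatenating a local view (the mean of its word vectors) with a global view (its topic distribution), and candidate answers are ranked by similarity in the joint space. All embeddings, vocabulary, and topic distributions below are toy values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pretrained word embeddings (local view): one 8-d vector per word.
vocab = {"gradient": 0, "descent": 1, "learning": 2, "rate": 3, "bake": 4, "cake": 5}
word_emb = rng.normal(size=(len(vocab), 8))

def embed(tokens, topic_dist):
    """Joint representation: local view (mean word vector) concatenated
    with the global view (the document's topic distribution)."""
    local = np.mean([word_emb[vocab[t]] for t in tokens], axis=0)
    return np.concatenate([local, topic_dist])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A question and two candidate answers with hypothetical topic distributions
# (e.g. inferred by a topic model over the corpus).
q  = embed(["gradient", "descent"], np.array([0.90, 0.05, 0.05]))
a1 = embed(["learning", "rate"],    np.array([0.80, 0.10, 0.10]))
a2 = embed(["bake", "cake"],        np.array([0.10, 0.10, 0.80]))

print("sim(q, a1) =", round(cosine(q, a1), 3))
print("sim(q, a2) =", round(cosine(q, a2), 3))
```

In the paper's actual model the two views are tied together probabilistically rather than concatenated, and the topic distributions are shared with the user-relationship modality to combat sparsity; this sketch only shows how a joint local/global embedding supports similarity-based answer ranking.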
Pages: 44-58 (15 pages)
Related Papers (50 total)
  • [41] Multi-Modal Knowledge-Aware Attention Network for Question Answering
    Zhang Y.
    Qian S.
    Fang Q.
    Xu C.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (05): 1037 - 1045
  • [42] MV-BART: Multi-view BART for Multi-modal Sarcasm Detection
    Zhuang, Xingjie
    Zhou, Fengling
    Li, Zhixin
    PROCEEDINGS OF THE 33RD ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2024, 2024, : 3602 - 3611
  • [43] A Multi-modal & Multi-view & Interactive Benchmark Dataset for Human Action Recognition
    Xu, Ning
    Liu, Anan
    Nie, Weizhi
    Wong, Yongkang
    Li, Fuwu
    Su, Yuting
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015, : 1195 - 1198
  • [44] Multi-modal and multi-view image dataset for weeds detection in wheat field
    Xu, Ke
    Jiang, Zhijian
    Liu, Qihang
    Xie, Qi
    Zhu, Yan
    Cao, Weixing
    Ni, Jun
    FRONTIERS IN PLANT SCIENCE, 2022, 13
  • [45] Multi-view Network Embedding with Structure and Semantic Contrastive Learning
    Shang, Yifan
    Ye, Xiucai
    Sakurai, Tetsuya
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 870 - 875
  • [46] Deep multi-view document clustering with enhanced semantic embedding
    Bai, Ruina
    Huang, Ruizhang
    Chen, Yanping
    Qin, Yongbin
    INFORMATION SCIENCES, 2021, 564 : 273 - 287
  • [47] Multi-level, multi-modal interactions for visual question answering over text in images
    Chen, Jincai
    Zhang, Sheng
    Zeng, Jiangfeng
    Zou, Fuhao
    Li, Yuan-Fang
    Liu, Tao
    Lu, Ping
    World Wide Web, 2022, 25 (04) : 1607 - 1623
  • [49] MVARN: Multi-view Attention Relation Network for Figure Question Answering
    Wang, Yingdong
    Wu, Qingfeng
    Lin, Weiqiang
    Ma, Linjian
    Li, Ying
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2023, 2023, 14119 : 30 - 38
  • [50] Multi-modal Multi-scale State Space Model for Medical Visual Question Answering
    Chen, Qishen
    Bian, Minjie
    He, Wenxuan
    Xu, Huahu
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VIII, 2024, 15023 : 328 - 342