Multi-modal multi-view Bayesian semantic embedding for community question answering

Cited by: 15
Authors
Sang, Lei [1 ,2 ]
Xu, Min [2 ]
Qian, ShengSheng [3 ]
Wu, Xindong [4 ]
Affiliations
[1] Hefei Univ Technol, Hefei, Anhui, Peoples R China
[2] Univ Technol Sydney, Sydney, NSW, Australia
[3] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[4] Univ Louisiana Lafayette, Lafayette, LA 70504 USA
Keywords
Community question answering; Semantic embedding; Multi-modal; Multi-view; Topic model; Word embedding
DOI
10.1016/j.neucom.2018.12.067
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Semantic embedding has demonstrated its value in latent representation learning of data and can be effectively adopted for many applications. However, it is difficult to propose a joint learning framework for semantic embedding in Community Question Answering (CQA), because CQA data have multi-view and sparse properties. In this paper, we propose a generic Multi-modal Multi-view Semantic Embedding (MMSE) framework via a Bayesian model for question answering. Compared with existing semantic learning methods, the proposed model has two main advantages: (1) To deal with the multi-view property, we utilize the Gaussian topic model to learn semantic embedding from both a local view and a global view. (2) To deal with the sparsity of question-answer pairs in CQA, social structure information is incorporated to enhance the quality of text content semantic embedding, using a shared topic distribution to model the relationship between the two modalities (user relationships and text content). We evaluate our model on question answering and expert finding tasks, and the experimental results on two real-world datasets show the effectiveness of our MMSE model for semantic embedding learning. (C) 2018 Published by Elsevier B.V.
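The abstract's central mechanism, a shared topic distribution that couples the text-content modality and the social-structure modality, with Gaussian topics producing embeddings, can be illustrated with a toy generative sketch. All dimensions, variable names, and the simplified sampling below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper)
K, D = 4, 8            # number of topics, embedding dimension
n_words, n_users = 6, 3

# Gaussian topics: topic k is a mean vector mu[k] with isotropic variance
mu = rng.normal(size=(K, D))
sigma = 0.1

# Shared topic distribution theta links the two modalities
theta = rng.dirichlet(np.ones(K))

# Modality 1 (text content): word embeddings drawn from Gaussian topics
topics_w = rng.choice(K, size=n_words, p=theta)
word_embeddings = mu[topics_w] + sigma * rng.normal(size=(n_words, D))

# Modality 2 (social structure): users assigned topics from the SAME theta,
# so sparse text can borrow statistical strength from user relationships
topics_u = rng.choice(K, size=n_users, p=theta)

# A question-level embedding as the topic-weighted mean of topic centers
question_embedding = theta @ mu
print(question_embedding.shape)  # (8,)
```

The point of the sketch is only the coupling: both modalities condition on one `theta`, which is how a Bayesian model can transfer information from user relationships to under-observed text.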
Pages: 44-58 (15 pages)