Multi-Modality Behavioral Influence Analysis for Personalized Recommendations in Health Social Media Environment

Cited by: 113
Authors
Zhou, Xiaokang [1 ,2 ]
Liang, Wei [3 ]
Wang, Kevin I-Kai [4 ]
Shimizu, Shohei [1 ,2 ]
Affiliations
[1] Shiga Univ, Fac Data Sci, Hikone 5228522, Japan
[2] RIKEN, Ctr Adv Intelligence Project AIP, Tokyo 1030027, Japan
[3] Hunan Univ Commerce, Key Lab Hunan Prov New Retail Virtual Real Techn, Changsha 410008, Hunan, Peoples R China
[4] Univ Auckland, Dept Elect Comp & Software Engn, Auckland 1010, New Zealand
Funding
National Key Research and Development Program of China;
Keywords
Behavioral analysis; health social media; neural networks; personalized recommendation; social influence; PREDICTION; INTERNET; THINGS;
DOI
10.1109/TCSS.2019.2918285
CLC number
TP3 [computing technology; computer technology];
Subject classification code
0812 ;
Abstract
Recently, health social media have engaged an increasing number of people in sharing their personal feelings, opinions, and experiences in the context of health informatics, which has drawn growing attention from both academia and industry. In this paper, we focus on behavioral influence analysis based on heterogeneous health data generated in social media environments. An integrated deep neural network (DNN)-based learning model is designed to analyze and describe the latent behavioral influence hidden across multiple modalities, in which a convolutional neural network (CNN)-based framework extracts time-series features within a given social context. The features learned through cross-modality influence analysis are then fed into a softmax classifier, yielding a restructured representation of high-level features for online physician rating and classification in a data-driven way. Finally, two algorithms for two representative application scenarios are developed to provide patients with personalized recommendations in health social media environments. Experiments on real-world data demonstrate the effectiveness of our proposed model and method.
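The abstract outlines a pipeline of CNN-based time-series feature extraction per modality, cross-modality fusion, and a softmax classifier for online physician rating. The Python (PyTorch) sketch below illustrates only that general shape; the class name CrossModalityRater, the two toy modalities, and all layer sizes and class counts are illustrative assumptions and do not reproduce the authors' published architecture.

# Minimal sketch, assuming PyTorch: per-modality 1-D CNNs extract temporal
# features, the features are concatenated across modalities, and a softmax
# classifier produces physician rating classes. All dimensions are hypothetical.
import torch
import torch.nn as nn

class CrossModalityRater(nn.Module):
    def __init__(self, in_channels=(4, 6), hidden=32, num_classes=5):
        super().__init__()
        # One lightweight temporal CNN per behavioral modality
        self.extractors = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(c, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),   # collapse the time axis
            )
            for c in in_channels
        ])
        # Fused cross-modality representation -> rating classifier
        self.classifier = nn.Linear(hidden * len(in_channels), num_classes)

    def forward(self, modalities):
        # modalities: list of tensors, each shaped (batch, channels_m, seq_len)
        feats = [ext(x).squeeze(-1) for ext, x in zip(self.extractors, modalities)]
        fused = torch.cat(feats, dim=1)      # simple cross-modality fusion
        return self.classifier(fused)        # logits; softmax applied below

if __name__ == "__main__":
    model = CrossModalityRater()
    batch = [torch.randn(8, 4, 64), torch.randn(8, 6, 64)]  # two toy modalities
    probs = torch.softmax(model(batch), dim=1)  # rating class distribution
    print(probs.shape)                          # torch.Size([8, 5])

During training one would typically apply nn.CrossEntropyLoss directly to the logits rather than the explicit softmax shown here.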
Pages: 888-897
Number of pages: 10
Related papers (50 in total)
  • [11] A Cascaded Multi-modality Analysis in Mild Cognitive Impairment
    Zhang, Lu
    Zaman, Akib
    Wang, Li
    Yan, Jingwen
    Zhu, Dajiang
    MACHINE LEARNING IN MEDICAL IMAGING (MLMI 2019), 2019, 11861 : 557 - 565
  • [12] Media Studies Futures: Whiteness, Indigeneity, Multi-modality, and a Politics of Possibility
    Henderson, Lisa
    TELEVISION & NEW MEDIA, 2020, 21 (06) : 581 - 589
  • [13] Influence of multi-modality on moving target selection in virtual reality
    Li, Y.
    Wu, D.
    Huang, J.
    Tian, F.
    Wang, H.
    Dai, G.
    Virtual Reality and Intelligent Hardware, 2019, 1 (03): : 303 - 315
  • [14] Multi-modality brachytherapy robot auxiliary system in mixed reality environment
    Wang, Kairui
    Bi, Tiantian
    Zhang, Jiyong
    Zhang, Yongde
    BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2019, 125 : 47 - 48
  • [15] Development of an integrated multi-modality cardiac review and conferencing digital environment
    Ratib, O
    Allada, V
    Hunt, K
    Dahlbom, M
    Wood, M
    MEDICAL IMAGING 2001: PACS AND INTEGRATED MEDICAL INFORMATION SYSTEMS: DESIGN AND EVALUATION, 2001, 4323 : 254 - 256
  • [16] Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom
    Lee, Y.
    Fullerton, G.
    Goins, B.
    MEDICAL PHYSICS, 2015, 42 (06) : 3261 - 3261
  • [17] Multi-modality imaging data analysis with partial least squares
    Chau, W
    Habib, R
    McIntosh, AR
    BRAIN AND COGNITION, 2004, 54 (02) : 140 - 142
  • [18] Cross-media retrieval via fusing multi-modality and multi-grained data
    Liu, Z.
    Yuan, S.
    Pei, X.
    Gao, S.
    Han, H.
    SCIENTIA IRANICA, 2023, 30 (05) : 1645 - 1669
  • [19] Explainable multi-task learning for multi-modality biological data analysis
    Tang, Xin
    Zhang, Jiawei
    He, Yichun
    Zhang, Xinhe
    Lin, Zuwan
    Partarrieu, Sebastian
    Hanna, Emma Bou
    Ren, Zhaolin
    Shen, Hao
    Yang, Yuhong
    Wang, Xiao
    Li, Na
    Ding, Jie
    Liu, Jia
    NATURE COMMUNICATIONS, 2023, 14 (01)