Textual sentiment analysis via three different attention convolutional neural networks and cross-modality consistent regression

Cited: 88
Authors
Zhang, Zufan [1 ]
Zou, Yang [1 ]
Gan, Chenquan [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
Keywords
Textual sentiment analysis; Word embedding; Lexicon embedding; Attention mechanism; Cross-modality consistent regression;
DOI
10.1016/j.neucom.2017.09.080
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Word embeddings and the CNN (convolutional neural network) architecture are crucial ingredients of sentiment analysis. However, sentiment and lexicon embeddings are rarely used, and CNNs struggle to capture the global features of a sentence. To this end, semantic embeddings, sentiment embeddings, and lexicon embeddings are applied for text encoding, and three different attention mechanisms, namely an attention vector, LSTM (long short-term memory) attention, and attentive pooling, are integrated with the CNN model in this paper. Additionally, each word and its context are explored to disambiguate the word's meaning and enrich the input representation. To improve the performance of the three attention CNN models, CCR (cross-modality consistent regression) and transfer learning are introduced; notably, this is the first use of CCR and transfer learning in textual sentiment analysis. Finally, experiments on two different datasets demonstrate that the proposed attention CNN models achieve the best or second-best results compared with existing state-of-the-art models. (c) 2017 Elsevier B.V. All rights reserved.
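To make the abstract's two key ingredients concrete, here is a minimal sketch in PyTorch, not the authors' implementation: a CNN text encoder in which attentive pooling replaces max pooling so every position in the sentence contributes to the sentence vector, and a CCR-style loss in the spirit of You et al. (WSDM'16, related paper [1] below) that fits each predictor to the shared sentiment label while penalizing disagreement between predictors. All class names, layer sizes, the loss weight, and the two-predictor simplification of CCR are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; NOT the authors' implementation.
# (1) AttentivePoolingCNN: a Conv1d text encoder where a learned attention
#     scorer replaces max pooling, so all positions contribute to the
#     sentence representation (one of the paper's three attention variants).
# (2) ccr_loss: a CCR-style objective after You et al. (WSDM'16): each
#     predictor fits the label while a consistency term keeps their outputs
#     close. The paper couples its three attention CNNs; two predictors are
#     used here for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePoolingCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=64, kernel=3, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=kernel // 2)
        self.att = nn.Linear(n_filters, 1, bias=False)  # attention scorer
        self.out = nn.Linear(n_filters, n_classes)

    def forward(self, tokens):                            # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)              # (batch, emb_dim, seq_len)
        h = torch.tanh(self.conv(x)).transpose(1, 2)      # (batch, seq_len, n_filters)
        alpha = F.softmax(self.att(h).squeeze(-1), dim=1) # attention weights per position
        pooled = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # weighted sum over positions
        return self.out(pooled)                           # class logits

def ccr_loss(logits_a, logits_b, labels, lam=0.5):
    fit = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    consistency = F.mse_loss(logits_a, logits_b)          # keep predictions consistent
    return fit + lam * consistency
```

In the paper the consistency term would couple the outputs of the attention-vector, LSTM-attention, and attentive-pooling models; the MSE form and the weight `lam` above are assumptions for illustration.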
Pages: 1407-1415
Page count: 9
Related Papers
17 in total (10 listed)
  • [1] Cross-modality Consistent Regression for Joint Visual-Textual Sentiment Analysis of Social Multimedia
    You, Quanzeng
    Luo, Jiebo
    Jin, Hailin
    Yang, Jianchao
    PROCEEDINGS OF THE NINTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM'16), 2016, : 13 - 22
  • [2] Joint Visual-Textual Sentiment Analysis Based on Cross-Modality Attention Mechanism
    Zhu, Xuelin
    Cao, Biwei
    Xu, Shuai
    Liu, Bo
    Cao, Jiuxin
    MULTIMEDIA MODELING (MMM 2019), PT I, 2019, 11295 : 264 - 276
  • [3] Visual-Textual Sentiment Analysis Enhanced by Hierarchical Cross-Modality Interaction
    Zhou, Tao
    Cao, Jiuxin
    Zhu, Xuelin
    Liu, Bo
    Li, Shancang
    IEEE SYSTEMS JOURNAL, 2021, 15 (03): : 4303 - 4314
  • [4] Pedestrian Recognition Using Cross-Modality Learning in Convolutional Neural Networks
    Pop, Danut Ovidiu
    Rogozan, Alexandrina
    Nashashibi, Fawzi
    Bensrhair, Abdelaziz
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2021, 13 (01) : 210 - 224
  • [5] Multi-layer cross-modality attention fusion network for multimodal sentiment analysis
Yin, Z.
Du, Y.
Liu, Y.
Wang, Y.
    Multimedia Tools and Applications, 2024, 83 (21) : 60171 - 60187
  • [6] Cross-Modality Compensation Convolutional Neural Networks for RGB-D Action Recognition
    Cheng, Jun
    Ren, Ziliang
    Zhang, Qieshi
    Gao, Xiangyang
    Hao, Fusheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (03) : 1498 - 1509
  • [7] Attention Visualization of Gated Convolutional Neural Networks with Self Attention in Sentiment Analysis
Yanagimoto, Hidekazu
    Hashimoto, Kiyota
    Okada, Makoto
    2018 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND DATA ENGINEERING (ICMLDE 2018), 2018, : 77 - 82
  • [8] VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS
    Chen, Xingyue
    Wang, Yunhong
    Liu, Qingjie
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1557 - 1561
  • [9] Visual and Textual Sentiment Analysis of a Microblog Using Deep Convolutional Neural Networks
    Yu, Yuhai
    Lin, Hongfei
    Meng, Jiana
    Zhao, Zhehuan
    ALGORITHMS, 2016, 9 (02)
  • [10] Cross-modality earth mover’s distance-driven convolutional neural network for different-modality data
Zuo, Zheng
Liu, Liang
Liu, Jiayong
Huang, Cheng
    Neural Computing and Applications, 2020, 32 : 9581 - 9592