A Novel Semantic-Enhanced Text Graph Representation Learning Approach through Transformer Paradigm

Cited by: 6
Author
Vo, Tham [1 ]
Affiliation
[1] Thu Dau Mot Univ, Binh Duong, Vietnam
Keywords
BERT; GCN; graph embedding; text graph transformer; ENSEMBLE;
DOI
10.1080/01969722.2022.2067632
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Among the common tasks in natural language processing (NLP), text classification is an important primitive that is widely applied across disciplines. Advanced deep learning architectures such as sequence-to-sequence (seq2seq) models with attention mechanisms have demonstrated remarkable improvements on many NLP tasks, including classification. However, seq2seq-based models still struggle to effectively capture long-range dependency relationships between words in a text corpus. Recent models integrating graph neural networks with textual graph transformers (TGT) have shown significant gains in preserving the structural n-hop co-occurrence relationships between words in a given corpus. Nevertheless, these models still fail to fully account for the sequential and contextual relations of words within a single document's graph. To meet these challenges, this article proposes a novel semantic-enhanced graph transformer-based textual representation learning approach, called SemTGT. SemTGT jointly learns both local rich-contextual and global long-range structural latent representations of texts to improve classification performance. Extensive experiments on standard datasets demonstrate the effectiveness of the proposed SemTGT model in comparison with recent seq2seq-based and textual graph embedding-based baselines.
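The abstract's core ingredients (a word co-occurrence graph per document, contextual word embeddings, and neighborhood aggregation) can be illustrated with a minimal NumPy sketch of one symmetric-normalized graph-convolution step followed by mean pooling. This is an assumption-laden illustration, not the paper's SemTGT implementation: the adjacency matrix, embedding dimensions, and random vectors standing in for BERT-style contextual embeddings are all toy placeholders.

```python
import numpy as np

np.random.seed(0)

# Toy word co-occurrence graph for one 5-word document:
# A[i, j] = 1 if words i and j co-occur within a sliding window.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Add self-loops and apply symmetric normalization: D^{-1/2}(A + I)D^{-1/2}.
A_hat = A + np.eye(5)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Stand-ins for contextual (e.g., BERT-style) word embeddings, dim 8,
# and a learnable projection to dim 4.
X = np.random.randn(5, 8)
W = np.random.randn(8, 4)

# One graph-convolution step: aggregate each word's normalized
# neighborhood, project, and apply a ReLU nonlinearity.
H = np.maximum(A_norm @ X @ W, 0.0)

# Mean-pool node states into one document representation,
# which a downstream classifier head would consume.
doc_vec = H.mean(axis=0)
print(doc_vec.shape)  # (4,)
```

In a full model along the lines the abstract describes, `X` would come from a pretrained contextual encoder, `W` would be trained end-to-end, and a transformer layer over the graph would replace or augment the single propagation step shown here.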
Pages: 499-525
Number of pages: 27
Related Papers
50 records in total
  • [41] A Model of Text-Enhanced Knowledge Graph Representation Learning with Collaborative Attention
    Wang, Yashen
    Zhang, Huanhuan
    Xie, Haiyong
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 236 - 251
  • [42] A Model of Text-Enhanced Knowledge Graph Representation Learning With Mutual Attention
    Wang, Yashen
    Zhang, Huanhuan
    Shi, Ge
    Liu, Zhirun
    Zhou, Qiang
    IEEE ACCESS, 2020, 8 : 52895 - 52905
  • [43] GLSEC: Global and local semantic-enhanced contrastive framework for knowledge graph completion
    Ma, Ruixin
    Wang, Xiaoru
    Cao, Cunxi
    Bu, Xiya
    Wu, Hao
    Zhao, Liang
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 250
  • [44] Deep Semantic-Enhanced Event Detection via Symmetric Graph Convolutional Network
    Sun, Chenchen
    Zhuo, Xingrui
    Lu, Zhenya
    Bu, Chenyang
    Wu, Gongqing
    2022 IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE GRAPH (ICKG), 2022, : 241 - 248
  • [45] UGTransformer: Unsupervised Graph Transformer Representation Learning
    Xu, Lixiang
    Liu, Haifeng
    Cui, Qingzhe
    Luo, Bin
    Li, Ning
    Chen, Yan
    Tang, Yuanyan
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [46] Frame Semantic-Enhanced Sentence Modeling for Sentence-level Extractive Text Summarization
    Guan, Yong
    Guo, Shaoru
    Li, Ru
    Li, Xiaoli
    Tan, Hongye
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 4045 - 4052
  • [47] A dynamic graph representation learning based on temporal graph transformer
    Zhong, Ying
    Huang, Chenze
    ALEXANDRIA ENGINEERING JOURNAL, 2023, 63 : 359 - 369
  • [48] GraKerformer: A Transformer With Graph Kernel for Unsupervised Graph Representation Learning
    Xu, Lixiang
    Liu, Haifeng
    Yuan, Xin
    Chen, Enhong
    Tang, Yuanyan
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, : 7320 - 7332
  • [50] Semantic Hilbert Space for Text Representation Learning
    Wang, Benyou
    Li, Qiuchi
    Melucci, Massimo
    Song, Dawei
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 3293 - 3299