DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder

Cited by: 10
Authors
Zhang, Zhenyu [1 ]
Guo, Tao [2 ]
Chen, Meng [3 ]
Affiliations
[1] JD AI, Chengdu, Peoples R China
[2] Xiaoduo AI, Chengdu, Peoples R China
[3] JD AI, Beijing, Peoples R China
Keywords
Dialogue Pre-training Model; Dialogue Representation; Intent Recognition; Emotion Recognition; Named Entity Recognition
DOI
10.1145/3459637.3482085
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the rapid development of artificial intelligence, conversational bots have become prevalent on mainstream E-commerce platforms, where they provide convenient and timely customer service. To serve users well, a conversational bot needs to understand the user's intention, detect the user's emotion, and extract the key entities from conversational utterances. However, understanding dialogues is regarded as a very challenging task. Unlike common language understanding, utterances in dialogues come alternately from different roles and are usually organized into hierarchical structures. To facilitate dialogue understanding, in this paper we propose a novel contextual dialogue encoder (i.e., DialogueBERT) based on the popular pre-trained language model BERT. Five self-supervised pre-training tasks are devised to learn the particularities of dialogue utterances. Four input embeddings, namely turn, role, token, and position embeddings, are integrated to capture the relationships between utterances. DialogueBERT was pre-trained on 70 million dialogues from real-world scenarios and then fine-tuned on three downstream dialogue understanding tasks. Experimental results show that DialogueBERT achieves strong results, with 88.63% accuracy for intent recognition, 94.25% accuracy for emotion recognition, and a 97.04% F1 score for named entity recognition, outperforming several strong baselines by a large margin.
Pages: 3647-3651
Number of pages: 5
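The abstract's description of the input layer, four embedding types combined per token, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration assuming a standard BERT-style summation; all dimensions (vocabulary size, hidden size, turn and role counts) are hypothetical placeholders, not values from the paper, and this is not the authors' released implementation.

```python
# Minimal sketch (not the authors' code) of a DialogueBERT-style input layer:
# token, position, turn, and role embeddings are summed per token, as the
# abstract describes. All sizes below are assumed, not taken from the paper.
import torch
import torch.nn as nn

class DialogueInputEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, max_pos=512,
                 max_turns=32, num_roles=2):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)   # subword identity
        self.position = nn.Embedding(max_pos, hidden)   # token position in the flattened dialogue
        self.turn = nn.Embedding(max_turns, hidden)     # which utterance/turn the token belongs to
        self.role = nn.Embedding(num_roles, hidden)     # speaker role, e.g. customer vs. bot
        self.norm = nn.LayerNorm(hidden)
        self.dropout = nn.Dropout(0.1)

    def forward(self, token_ids, turn_ids, role_ids):
        # Position ids 0..L-1, broadcast across the batch dimension.
        pos_ids = torch.arange(token_ids.size(1), device=token_ids.device)
        x = (self.token(token_ids) + self.position(pos_ids)
             + self.turn(turn_ids) + self.role(role_ids))
        return self.dropout(self.norm(x))

# Usage: a toy 2-turn dialogue of 6 tokens.
emb = DialogueInputEmbedding()
token_ids = torch.randint(0, 30522, (1, 6))
turn_ids = torch.tensor([[0, 0, 0, 1, 1, 1]])    # first three tokens in turn 0, rest in turn 1
role_ids = torch.tensor([[0, 0, 0, 1, 1, 1]])    # role 0 = customer, role 1 = bot (assumed)
print(emb(token_ids, turn_ids, role_ids).shape)  # torch.Size([1, 6, 768])
```

The summed embedding would then feed a standard BERT encoder; the turn and role embeddings are what distinguish this input layer from vanilla BERT, letting the model see the dialogue's alternating-role, multi-turn structure.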