Combining Self-supervised Learning and Active Learning for Disfluency Detection

Cited by: 4
Authors
Wang, Shaolei [1 ]
Wang, Zhongyuan [1 ]
Che, Wanxiang [1 ]
Zhao, Sendong [1 ]
Liu, Ting [1 ]
Affiliation
[1] Harbin Inst Technol, 2 YiKuang St,Tech & Innovat Bldg,HIT Sci Pk, Harbin 150001, Heilongjiang, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Disfluency detection; self-supervised learning; active learning; pre-training technology;
DOI
10.1145/3487290
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spoken language is fundamentally different from written language in that it contains frequent disfluencies, or parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle this training-data bottleneck, we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) a sentence-classification task to distinguish original sentences from grammatically incorrect ones. We then combine these two tasks to jointly pre-train a neural network, which is subsequently fine-tuned on human-annotated disfluency detection training data. The self-supervised learning method captures task-specific knowledge for disfluency detection and achieves better performance than other supervised methods when fine-tuned on a small annotated dataset. However, because the pseudo training data are generated by simple heuristics and cannot cover all disfluency patterns, a performance gap remains compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning into the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label, and thus addresses the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model matches state-of-the-art performance with only about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
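To make the add/delete pseudo-data heuristic and the two pre-training tasks concrete, the following is a minimal Python sketch. All names (corrupt, make_pretraining_pair, select_for_annotation) and the noise rates are illustrative assumptions, not the authors' released implementation; the active-learning step shown is one plausible acquisition criterion (least-confidence sampling), which the abstract does not specify.

import random

def corrupt(tokens, vocab, p_add=0.15, p_del=0.15):
    # Randomly insert noisy words (tag 1) and delete original words,
    # mirroring the add/delete heuristic described in the abstract.
    noisy, tags = [], []
    for tok in tokens:
        if random.random() < p_add:
            noisy.append(random.choice(vocab))  # inserted noise word
            tags.append(1)                      # target of the tagging task
        if random.random() < p_del:
            continue                            # simulate a deletion
        noisy.append(tok)
        tags.append(0)                          # original word kept
    return noisy, tags

def make_pretraining_pair(sentence, vocab):
    # Build one example per self-supervised task:
    # (i) token tagging over the corrupted sentence, and
    # (ii) sentence classification: original (1) vs. corrupted (0).
    tokens = sentence.split()
    noisy, tags = corrupt(tokens, vocab)
    return (noisy, tags), [(tokens, 1), (noisy, 0)]

def select_for_annotation(confidences, k):
    # Hypothetical active-learning step (least-confidence sampling):
    # send the k unlabeled sentences the model is least sure about
    # to human annotators during fine-tuning.
    ranked = sorted(range(len(confidences)), key=lambda i: confidences[i])
    return ranked[:k]

For example, make_pretraining_pair("i want a flight to boston", vocab) yields one tagging example over the corrupted sentence and a pair of classification examples; during fine-tuning, a function like select_for_annotation would be called each round over the remaining unlabeled pool to choose which sentences to label next.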
Pages: 25