Widening the bottleneck of lexical choice for non-autoregressive translation

Cited by: 0
Authors:
Ding, Liang [1 ]
Wang, Longyue [2 ]
Liu, Siyou [3 ]
Luo, Weihua [2 ]
Zhang, Kaifu [2 ]
Affiliations:
[1] Univ Sydney, Sydney, Australia
[2] Alibaba Int Digital Commerce, Hangzhou, Peoples R China
[3] Univ Macau, Macau, Peoples R China
Source: Computer Speech and Language
Keywords: Lexical choice; Non-autoregressive translation; Low-frequency word; Knowledge distillation; New benchmark
DOI: 10.1016/j.csl.2024.101765
CLC number: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recently, non-autoregressive models have become popular in the natural language processing (NLP) community and have gradually spread to neighboring fields such as speech recognition and computer vision. Non-autoregressive translation (NAT) was proposed to improve the decoding efficiency of translation models by predicting all tokens independently and simultaneously. To reduce the complexity of the raw data, knowledge distillation (KD) from an autoregressive translation (AT) teacher is the standard preliminary step for training NAT models. In this study, we first reveal that the discrepancy between the raw and the KD data leads to lexical choice errors when predicting low-frequency words. We then bridge this gap with three architecture-free approaches that introduce no additional computational cost: (1) Model Level, where we add an extra Kullback-Leibler divergence term that compares the lexical choice of the NAT model with that embedded in the raw data; (2) Parallel Data Level, where we reactivate low-frequency information through raw pre-training and reverse KD training; (3) Monolingual Data Level, where we transfer the knowledge of both the bilingual raw data and additional monolingual data to the NAT model. We conduct experiments on widely used NAT benchmarks (i.e., WMT14 English-German and WMT16 Romanian-English) over two advanced NAT architectures. Results demonstrate that the proposed approaches significantly and universally improve translation quality by reducing translation errors on low-frequency words. Extensive analyses show that (1) these approaches generate translations that contain more low-frequency words; (2) the techniques can be combined profitably to further recall useful information lost in standard KD; (3) enlarging the monolingual data consistently improves BLEU scores up to a point, but the gains do not continue when the monolingual data is scaled further. Finally, we establish a new NAT benchmark by validating our approaches on three additional datasets that vary in language and scale (i.e., WMT17 Chinese-English, WMT19 English-German and WAT17 Japanese-English). We will release data, code and models, which we hope will promote research in this field.
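The Model Level approach can be pictured with a short PyTorch-style sketch. This is a minimal illustration under stated assumptions, not the authors' released implementation: the prior tensor `raw_lexical_prior` (e.g., per-position target-word probabilities estimated from word-alignment statistics on the raw bilingual data), the weight `lambda_kl`, and the function name are all hypothetical names introduced here for clarity.

```python
import torch
import torch.nn.functional as F


def nat_loss_with_lexical_prior(logits, kd_targets, raw_lexical_prior,
                                pad_id, lambda_kl=0.5):
    """Token-level cross-entropy on the distilled targets plus a KL
    regularizer toward a word-level prior estimated from the raw data.

    logits:            (batch, tgt_len, vocab) NAT decoder outputs
    kd_targets:        (batch, tgt_len) distilled target token ids
    raw_lexical_prior: (batch, tgt_len, vocab) per-position probabilities
                       over target words (assumed precomputed, e.g. from
                       alignment statistics on the raw bilingual data)
    """
    log_probs = F.log_softmax(logits, dim=-1)

    # Standard NAT objective: cross-entropy against the KD targets.
    ce = F.nll_loss(log_probs.transpose(1, 2), kd_targets,
                    ignore_index=pad_id)

    # Extra term: KL(prior || model) per target position, ignoring padding,
    # which pulls the model's lexical choice toward the raw-data prior.
    kl = F.kl_div(log_probs, raw_lexical_prior, reduction="none").sum(-1)
    mask = kd_targets.ne(pad_id).float()
    kl = (kl * mask).sum() / mask.sum().clamp(min=1.0)

    return ce + lambda_kl * kl
```

Because the regularizer only adds one extra term to the training loss, decoding is untouched, which is consistent with the paper's claim that the approaches are architecture-free.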
Pages: 16
Related papers (50 records)
  • [41] Progressive Multi-Granularity Training for Non-Autoregressive Translation
    Ding, Liang
    Wang, Longyue
    Liu, Xuebo
    Wong, Derek F.
    Tao, Dacheng
    Tu, Zhaopeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2797 - 2803
  • [42] Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input
    Guo, Junliang
    Tan, Xu
    He, Di
    Qin, Tao
    Xu, Linli
    Liu, Tie-Yan
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3723 - 3730
  • [43] Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation
    Shao, Chenze
    Feng, Yang
    Zhang, Jinchao
    Meng, Fandong
    Chen, Xilin
    Zhou, Jie
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3013 - 3024
  • [44] A Lexical-aware Non-autoregressive Transformer-based ASR Model
    Lin, Chong-En
    Chen, Kuan-Yu
    INTERSPEECH 2023, 2023, : 1434 - 1438
  • [45] Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
    Du, Cunxiao
    Tu, Zhaopeng
    Jiang, Jing
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [46] A Study of Syntactic Multi-Modality in Non-Autoregressive Machine Translation
    Zhang, Kexun
    Wang, Rui
    Tan, Xu
    Guo, Junliang
    Ren, Yi
    Qin, Tao
    Liu, Tie-Yan
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 1747 - 1757
  • [47] Alleviating repetitive tokens in non-autoregressive machine translation with unlikelihood training
    Wang, Shuheng
    Shi, Shumin
    Huang, Heyan
    SOFT COMPUTING, 2024, 28 (5) : 4681 - 4688
  • [49] Sequence-Level Training for Non-Autoregressive Neural Machine Translation
    Shao, Chenze
    Feng, Yang
    Zhang, Jinchao
    Meng, Fandong
    Zhou, Jie
    COMPUTATIONAL LINGUISTICS, 2021, 47 (04) : 891 - 925
  • [50] Improving Non-autoregressive Machine Translation with Error Exposure and Consistency Regularization
    Chen, Xinran
    Duan, Sufeng
    Liu, Gongshen
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 240 - 252