Widening the bottleneck of lexical choice for non-autoregressive translation

Cited: 0
Authors
Ding, Liang [1 ]
Wang, Longyue [2 ]
Liu, Siyou [3 ]
Luo, Weihua [2 ]
Zhang, Kaifu [2 ]
Affiliations
[1] Univ Sydney, Sydney, Australia
[2] Alibaba Int Digital Commerce, Hangzhou, Peoples R China
[3] Univ Macau, Macau, Peoples R China
Keywords
Lexical choice; Non-autoregressive translation; Low-frequency word; Knowledge distillation; New benchmark;
DOI
10.1016/j.csl.2024.101765
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, non-autoregressive models have enjoyed great popularity in the natural language processing (NLP) community and have gradually spread to neighbouring areas such as speech recognition and computer vision. Non-autoregressive translation (NAT) has been proposed to improve the decoding efficiency of translation models by predicting all tokens independently and simultaneously. To reduce the complexity of the raw data, knowledge distillation (KD) from an autoregressive translation (AT) teacher is the standard preliminary step for training NAT models. In this study, we first reveal that the discrepancy between the raw and the KD data leads to lexical choice errors when predicting low-frequency words. We then bridge this gap with three architecture-free approaches that introduce no extra computational cost: (1) Model Level, where we add a Kullback-Leibler divergence term that compares the lexical choice of the NAT model with that embedded in the raw data; (2) Parallel Data Level, where we reactivate low-frequency information through raw pre-training and reverse KD training; (3) Monolingual Data Level, where we transfer both the knowledge of the bilingual raw data and that of new monolingual data to the NAT model. We conduct experiments on widely used NAT benchmarks (i.e. WMT14 English-German and WMT16 Romanian-English) over two advanced NAT architectures. Results demonstrate that the proposed approaches significantly and universally improve translation quality by reducing translation errors on low-frequency words. Extensive analyses show that (1) these approaches generate translations that contain more low-frequency words; (2) the techniques can be profitably combined to further recall the useful information lost in standard KD; and (3) enlarging the monolingual data improves the BLEU scores, although this trend does not hold when the monolingual data is scaled up further. Finally, we establish new NAT benchmarks by validating our approaches on three additional datasets that vary in language and scale (i.e. WMT17 Chinese-English, WMT19 English-German and WAT17 Japanese-English). We will release data, code and models, which we hope will significantly promote research in this field.
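As an illustration of the Model Level idea sketched in the abstract, the following minimal PyTorch snippet shows one plausible form such a regularizer could take: an extra Kullback-Leibler term that pulls the NAT model's per-position token distribution toward a lexical-choice prior estimated from the raw bilingual data. The function name, tensor shapes, the kl_weight parameter and the way raw_prior is obtained are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch of a raw-data lexical-choice regularizer for NAT training.
import torch
import torch.nn.functional as F

def nat_loss_with_raw_prior(logits, targets, raw_prior, kl_weight=0.5, pad_id=0):
    """
    logits:    (batch, tgt_len, vocab) - NAT decoder outputs.
    targets:   (batch, tgt_len)        - distilled (KD) reference tokens.
    raw_prior: (batch, tgt_len, vocab) - per-position lexical-choice distribution
               estimated from the raw bilingual data (e.g. via word alignments);
               how this prior is built is left abstract here.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    mask = targets.ne(pad_id).float()

    # Standard token-level cross-entropy against the distilled references.
    ce = F.nll_loss(log_probs.transpose(1, 2), targets, reduction="none")
    ce = (ce * mask).sum() / mask.sum()

    # Extra KL term nudging the NAT lexical choice toward the raw-data prior.
    kl = F.kl_div(log_probs, raw_prior, reduction="none").sum(-1)
    kl = (kl * mask).sum() / mask.sum()

    return ce + kl_weight * kl

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(2, 5, 100)                          # (batch, tgt_len, vocab)
    targets = torch.randint(1, 100, (2, 5))                  # distilled references
    prior = torch.softmax(torch.randn(2, 5, 100), dim=-1)    # stand-in raw-data prior
    print(nat_loss_with_raw_prior(logits, targets, prior).item())

Weighting the KL term lets the model keep fitting the simplified KD references while still being exposed to the low-frequency lexical choices present in the raw data.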
Pages: 16
Related papers
50 records in total
  • [1] Correcting translation for non-autoregressive transformer
    Wang, Shuheng
    Huang, Heyan
    Shi, Shumin
    Li, Dongbai
    Guo, Dongen
    APPLIED SOFT COMPUTING, 2025, 168
  • [2] Revisiting Non-Autoregressive Translation at Scale
    Wang, Zhihao
    Wang, Longyue
    Su, Jinsong
    Yao, Junfeng
    Tu, Zhaopeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 12051 - 12065
  • [3] Incorporating a local translation mechanism into non-autoregressive translation
    Kong, Xiang
    Zhang, Zhisong
    Hovy, Eduard
arXiv, 2020
  • [4] Integrating Translation Memories into Non-Autoregressive Machine Translation
    Xu, Jitao
    Crego, Josep
    Yvon, Francois
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1326 - 1338
  • [5] Incorporating a Local Translation Mechanism into Non-autoregressive Translation
    Kong, Xiang
    Zhang, Zhisong
    Hovy, Eduard
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1067 - 1073
  • [6] Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts
    Emelin, Denis
    Titov, Ivan
    Sennrich, Rico
    FOURTH CONFERENCE ON MACHINE TRANSLATION (WMT 2019), VOL 1: RESEARCH PAPERS, 2019, : 102 - 115
  • [7] Non-autoregressive Streaming Transformer for Simultaneous Translation
    Ma, Zhengrui
    Zhang, Shaolei
    Guo, Shoutao
    Shao, Chenze
    Zhang, Min
    Feng, Yang
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5177 - 5190
  • [8] Enhanced encoder for non-autoregressive machine translation
    Wang, Shuheng
    Shi, Shumin
    Huang, Heyan
    MACHINE TRANSLATION, 2021, 35 (04) : 595 - 609
  • [9] Neighbors Are Not Strangers: Improving Non-Autoregressive Translation under Low-Frequency Lexical Constraints
    Zeng, Chun
    Chen, Jiangjie
    Zhuang, Tianyi
    Xu, Rui
    Yang, Hao
    Ying, Qin
    Tao, Shimin
    Xiao, Yanghua
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5777 - 5790
  • [10] Rephrasing the Reference for Non-autoregressive Machine Translation
    Shao, Chenze
    Zhang, Jinchao
    Zhou, Jie
    Feng, Yang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13538 - 13546