Exploring Human-Like Translation Strategy with Large Language Models

Cited by: 11
Authors
He, Zhiwei [1 ]
Liang, Tian [2 ]
Jiao, Wenxiang [3 ]
Zhang, Zhuosheng [1 ]
Yang, Yujiu [2]
Wang, Rui [1 ]
Tu, Zhaopeng [3 ]
Shi, Shuming [3 ]
Wang, Xing [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Tencent AI Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational linguistics;
DOI
10.1162/tacl_a_00642
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) have demonstrated impressive capabilities in general scenarios, exhibiting a level of aptitude that approaches, and in some aspects even surpasses, human-level intelligence. Among their numerous skills, the translation abilities of LLMs have received considerable attention. In contrast to typical machine translation, which focuses solely on source-to-target mapping, LLM-based translation can potentially mimic the human translation process, which might take preparatory steps to ensure high-quality translation. This work explores this possibility by proposing the MAPS framework, which stands for Multi-Aspect Prompting and Selection. Specifically, we enable LLMs to first analyze the given source sentence and induce three aspects of translation-related knowledge (keywords, topics, and relevant demonstrations) to guide the final translation process. Moreover, we employ a selection mechanism based on quality estimation to filter out noisy and unhelpful knowledge. Both automatic evaluation (3 LLMs × 11 directions × 2 automatic metrics) and human evaluation (preference study and MQM) demonstrate the effectiveness of MAPS. Further analysis shows that, by mimicking the human translation process, MAPS reduces various translation errors such as hallucination, ambiguity, mistranslation, awkward style, untranslated text, and omission. Source code is available at https://github.com/zwhe99/MAPS-mt.
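The abstract outlines a two-stage pipeline: knowledge elicitation (keywords, topics, relevant demonstrations) followed by quality-estimation-based selection over candidate translations. The Python sketch below illustrates that flow under stated assumptions; the prompt wording, the function names (`maps_translate`, `generate`, `qe_score`), and the one-candidate-per-aspect design are illustrative guesses, not the authors' released implementation (see https://github.com/zwhe99/MAPS-mt for the actual code).

```python
# Minimal sketch of the MAPS idea (Multi-Aspect Prompting and Selection) as
# described in the abstract. `generate` stands in for any LLM API and
# `qe_score` for any reference-free quality-estimation model; both are
# user-supplied assumptions, not part of the original paper's code.

from typing import Callable, List

def maps_translate(
    src: str,
    src_lang: str,
    tgt_lang: str,
    generate: Callable[[str], str],          # LLM call: prompt -> completion
    qe_score: Callable[[str, str], float],   # quality estimation: (src, hyp) -> score
) -> str:
    """Translate `src` by eliciting translation-related knowledge first,
    then selecting the best candidate with a QE-based filter."""
    # 1) Ask the LLM for three aspects of knowledge about the source sentence.
    keywords = generate(f"Extract the keywords of this {src_lang} sentence and "
                        f"give their {tgt_lang} translations:\n{src}")
    topics = generate(f"Describe the topics of this {src_lang} sentence:\n{src}")
    demos = generate(f"Write a {src_lang}-{tgt_lang} sentence pair similar in "
                     f"style and domain to:\n{src}")

    # 2) Produce one baseline candidate plus one candidate per knowledge aspect.
    base_prompt = (f"Translate the following {src_lang} sentence into "
                   f"{tgt_lang}:\n{src}\nTranslation:")
    candidates: List[str] = [generate(base_prompt)]
    for knowledge in (keywords, topics, demos):
        candidates.append(generate(f"{knowledge}\n\n{base_prompt}"))

    # 3) Selection: keep the candidate the QE model scores highest, which
    #    filters out candidates misled by noisy or unhelpful knowledge.
    return max(candidates, key=lambda hyp: qe_score(src, hyp))
```

The selection step is deliberately left abstract here so the sketch stays independent of any particular LLM API or QE model; the paper's own selection relies on reference-free quality estimation.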
Pages: 229 - 246
Number of pages: 18
Related Papers
50 records in total
  • [1] Towards Human-Like Educational Question Generation with Large Language Models
    Wang, Zichao
    Valdez, Jakob
    Mallick, Debshila Basu
    Baraniuk, Richard G.
    ARTIFICIAL INTELLIGENCE IN EDUCATION, PT I, 2022, 13355 : 153 - 166
  • [2] STEREOMAP: Quantifying the Awareness of Human-like Stereotypes in Large Language Models
    Jeoung, Sullam
    Ge, Yubin
    Diesner, Jana
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 12236 - 12256
  • [3] Do Large Language Models Show Human-like Biases? Exploring Confidence-Competence Gap in AI
    Singh, Aniket Kumar
    Lamichhane, Bishal
    Devkota, Suman
    Dhakal, Uttam
    Dhakal, Chandra
    INFORMATION, 2024, 15 (02)
  • [4] Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases
    Ando, Risako
    Morishita, Takanobu
    Abe, Hirohiko
    Mineshima, Koji
    Okada, Mitsuhiro
    arXiv, 2023,
  • [5] Human-like problem-solving abilities in large language models using ChatGPT
    Orru, Graziella
    Piarulli, Andrea
    Conversano, Ciro
    Gemignani, Angelo
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [6] Exploring Human-Like Reading Strategy for Abstractive Text Summarization
    Yang, Min
    Qu, Qiang
    Tu, Wenting
    Shen, Ying
    Zhao, Zhou
    Chen, Xiaojun
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 7362 - 7369
  • [7] Exploring Large Language Models in Intent Acquisition and Translation
    Fontana, Mattia
    Martini, Barbara
    Sciarrone, Filippo
    2024 IEEE 10TH INTERNATIONAL CONFERENCE ON NETWORK SOFTWARIZATION, NETSOFT 2024, 2024, : 231 - 234
  • [8] Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
    Cui, Can
    Ma, Yunsheng
    Cao, Xu
    Ye, Wenqian
    Wang, Ziran
    2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024, 2024, : 902 - 909
  • [9] Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT
    Hagendorff, Thilo
    Fabi, Sarah
    Kosinski, Michal
    NATURE COMPUTATIONAL SCIENCE, 2023, 3 (10): 833 - 838