Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding

Cited by: 0
Authors
Ghaddar, Abbas [1 ]
Wu, Yimeng [1 ]
Bagga, Sunyam [1 ]
Rashid, Ahmad [1 ]
Bibi, Khalil [1 ]
Rezagholizadeh, Mehdi [1 ]
Xing, Chao [1 ]
Wang, Yasheng [1 ]
Xinyu, Duan [2 ]
Wang, Zhefeng [2 ]
Huai, Baoxing [2 ]
Jiang, Xin [1 ]
Liu, Qun [1 ]
Langlais, Philippe [3 ]
Affiliations
[1] Huawei Technologies Co., Ltd., China
[2] Huawei Cloud Computing Technologies Co., Ltd., China
[3] RALI/DIRO, Université de Montréal, Canada
Source
arXiv | 2022
Keywords
Compendex;
DOI
Not available
Abstract
Benchmarking
Related Papers
50 items in total
  • [31] Palenzuela, Alvaro J. Jimenez; Frasincar, Flavius; Trusca, Maria Mihaela. Modeling Second Language Acquisition with pre-trained neural language models. Expert Systems with Applications, 2022, 207.
  • [32] Tin Van Huynh; Huy Quoc To; Kiet Van Nguyen; Ngan Luu-Thuy Nguyen. Error Investigation of Pre-trained BERTology Models on Vietnamese Natural Language Inference. Recent Challenges in Intelligent Information and Database Systems, ACIIDS 2022, 2022, 1716: 176-188.
  • [33] Wang, Chengyu; Dai, Suyang; Wang, Yipeng; Yang, Fei; Qiu, Minghui; Chen, Kehan; Zhou, Wei; Huang, Jun. ARoBERT: An ASR Robust Pre-Trained Language Model for Spoken Language Understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, 30: 1207-1218.
  • [34] Zhu, Biru; Qin, Yujia; Cui, Ganqu; Chen, Yangyi; Zhao, Weilin; Fu, Chong; Deng, Yangdong; Liu, Zhiyuan; Wang, Jingang; Wu, Wei; Sun, Maosong; Gu, Ming. Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [35] Alghanmi, Israa; Espinosa-Anke, Luis; Schockaert, Steven. Probing Pre-Trained Language Models for Disease Knowledge. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 3023-3033.
  • [36] Durrani, Nadir; Sajjad, Hassan; Dalvi, Fahim; Belinkov, Yonatan. Analyzing Individual Neurons in Pre-trained Language Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4865-4880.
  • [37] Casas, Jacky; Torche, Samuel; Daher, Karl; Mugellini, Elena; Abou Khaled, Omar. Emotional Paraphrasing Using Pre-trained Language Models. 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2021.
  • [38] Li, Lei; Lin, Yankai; Ren, Shuhuai; Li, Peng; Zhou, Jie; Sun, Xu. Dynamic Knowledge Distillation for Pre-trained Language Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 379-389.
  • [39] Yao, Yuan; Dong, Bowen; Zhang, Ao; Zhang, Zhengyan; Xie, Ruobing; Liu, Zhiyuan; Lin, Leyu; Sun, Maosong; Wang, Jianyong. Prompt Tuning for Discriminative Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2022), 2022: 3468-3473.
  • [40] Westhelle, Matheus; Bencke, Luciana; Moreira, Viviane P. Impact of Morphological Segmentation on Pre-trained Language Models. Intelligent Systems, PT II, 2022, 13654: 402-416.