Bangla-BERT: Transformer-Based Efficient Model for Transfer Learning and Language Understanding

Cited by: 17
Authors
Kowsher, M. [1 ]
Sami, Abdullah A. S. [2 ]
Prottasha, Nusrat Jahan [3 ]
Arefin, Mohammad Shamsul [3 ,4 ]
Dhar, Pranab Kumar [4 ]
Koshiba, Takeshi [5 ]
Affiliations
[1] Stevens Inst Technol, Dept Comp Sci, Hoboken, NJ 07030 USA
[2] Chittagong Univ Engn & Technol, Dept Comp Sci & Engn, Chattogram 4349, Bangladesh
[3] Daffodil Int Univ, Dept Comp Sci & Engn, Dhaka 1207, Bangladesh
[4] Chittagong Univ Engn & Technol, Chattogram 4349, Bangladesh
[5] Waseda Univ, Shinjuku Ku, Tokyo 1698050, Japan
Source
IEEE ACCESS, 2022, Vol. 10
Keywords
Bit error rate; Learning systems; Transformers; Data models; Computational modeling; Internet; Transfer learning; Bangla NLP; BERT-base; large corpus; transformer;
DOI
10.1109/ACCESS.2022.3197662
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The advent of pre-trained language models has ushered in a new era of Natural Language Processing (NLP). Among these, Transformer-based models such as BERT have grown popular because of their state-of-the-art effectiveness. However, such models are concentrated on resource-rich languages, leaving other languages to multilingual models such as mBERT. Two fundamental limitations of mBERT become especially acute for a low-resource language like Bangla: it was trained on a limited, curated dataset, and its weights are shared across all supported languages. Moreover, research on other languages suggests that a language-specific BERT model will outperform multilingual ones. This paper introduces Bangla-BERT, a monolingual BERT model for the Bangla language. Despite the limited data available for Bangla NLP tasks, we pre-train on the largest Bangla language-model dataset, BanglaLM, which we constructed from 40 GB of text data. Bangla-BERT achieves the best results on all datasets and substantially improves state-of-the-art performance in binary linguistic classification, multilabel extraction, and named entity recognition, outperforming multilingual BERT and prior work. The pre-trained model is also compared against non-contextual models such as Bangla FastText and word2vec on the downstream tasks, and it is further evaluated by transfer learning with hybrid deep learning models such as LSTM, CNN, and CRF for NER, where Bangla-BERT again outperforms state-of-the-art methods. The proposed model is assessed on benchmark datasets, including BanFakeNews, Sentiment Analysis on Bengali News Comments, and Cross-lingual Sentiment Analysis in Bengali. Bangla-BERT surpasses all prior state-of-the-art results by 3.52%, 2.2%, and 5.3%, respectively.
Pages: 91855-91870
Page count: 16
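As a rough illustration of the transfer-learning workflow the abstract describes (pre-train once on BanglaLM, then fine-tune per downstream task), the sketch below loads a BERT-base checkpoint and attaches a binary classification head via the Hugging Face Transformers library. The checkpoint name "bangla-bert-base" is a placeholder assumption, not the paper's published model ID; any BERT-base checkpoint pre-trained on Bangla text fits the same pattern.

# Minimal fine-tuning sketch; the checkpoint name below is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bangla-bert-base"  # placeholder, not the authors' model ID
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Binary linguistic classification head (e.g., fake-news detection
# on BanFakeNews); num_labels=2 initializes a fresh output layer.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

# Tokenize one input text and run a forward pass for a prediction.
inputs = tokenizer(
    "example Bangla sentence",
    truncation=True,
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()

For the NER experiments, the same checkpoint would instead be wrapped with a token-classification head (e.g., AutoModelForTokenClassification), whose per-token outputs could feed a CRF layer as in the hybrid LSTM/CNN/CRF models the abstract mentions.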