QuesBELM: A BERT based Ensemble Language Model for Natural Questions

Cited: 0
Authors
Pranesh, Raj Ratn [1]
Shekhar, Ambesh [1]
Pallavi, Smita [1]
Affiliations
[1] Birla Inst Technol, Mesra, India
Keywords
Ensemble model; Question Answering; Deep learning; Natural Language Processing; Transformer Architecture
DOI
10.1109/icccs49678.2020.9277176
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
A core goal in artificial intelligence is to build systems that can read the web and then answer complex questions about any topic. Such question-answering (QA) systems could have a major impact on the way we access information. In this paper, we address the QA task on Google's Natural Questions (NQ) dataset, which contains real user questions issued to Google Search together with the answers found in Wikipedia by annotators. In our work, we systematically compare the performance of powerful Transformer-based variant models, BERT-base, BERT-large-WWM, and ALBERT-XXL, on the Natural Questions dataset. We also propose a state-of-the-art BERT-based ensemble language model, QuesBELM, which combines existing BERT variants into a more accurate stacking ensemble for question answering. The model integrates the top-K predictions from the individual language models to determine the best answer overall. Our model surpassed the baseline language models with harmonic mean scores of 0.731 and 0.582 on the long answer (LA) and short answer (SA) tasks respectively, an average improvement of 10% over the baseline models.
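The abstract describes integrating top-K candidate answers from several models to select a single best answer, but does not specify the aggregation rule. A minimal sketch of one plausible scheme, summing candidate-span scores across models and taking the argmax, is shown below; the function name and the example predictions are illustrative assumptions, not the paper's published method.

```python
from collections import defaultdict

def ensemble_top_k(model_predictions):
    """Aggregate top-K (answer_span, score) lists from several QA models
    and return the span with the highest summed score.

    NOTE: this is a hypothetical sketch of score-sum ensembling; the
    paper's exact stacking procedure may differ."""
    totals = defaultdict(float)
    for preds in model_predictions:   # one list of candidates per model
        for span, score in preds:     # that model's top-K candidates
            totals[span] += score
    return max(totals, key=totals.get)

# Example: three models each propose their top-2 candidate spans.
preds = [
    [("in 1998", 0.9), ("1998", 0.4)],
    [("1998", 0.8), ("in 1998", 0.5)],
    [("in 1998", 0.7), ("September 1998", 0.3)],
]
print(ensemble_top_k(preds))  # → "in 1998" (summed score 2.1)
```

Spans proposed by multiple models accumulate evidence, so an answer that no single model ranked first can still win the ensemble vote.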
Pages: 5
Related Papers (50 records)
  • [41] To BERT or not to BERT: advancing non-invasive prediction of tumor biomarkers using transformer-based natural language processing (NLP)
    Tejani, Ali S.
    EUROPEAN RADIOLOGY, 2023, 33 (11) : 8014 - 8016
  • [42] BERT-Based Medical Chatbot: Enhancing Healthcare Communication through Natural Language Understanding
    Babu, Arun
    Boddu, Sekhar Babu
    EXPLORATORY RESEARCH IN CLINICAL AND SOCIAL PHARMACY, 2024, 13
  • [44] A BERT-based Language Modeling Framework
    Chien, Chin-Yueh
    Chen, Kuan-Yu
    INTERSPEECH 2022, 2022, : 699 - 703
  • [45] Semantic Role Labeling For Russian Language Based on Ensemble Model
    Zheng, Xinping
    Zhou, Bin
    Huang, Jiuming
    Liu, Yunxuan
    Wang, Hao
    Wang, Zhichao
    PROCEEDINGS OF 2019 IEEE 8TH JOINT INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE (ITAIC 2019), 2019, : 1263 - 1268
  • [46] Natural Language Understanding for Grading Essay Questions in Persian Language
    Mokhtari-Fard, Iman
    CHINESE COMPUTATIONAL LINGUISTICS AND NATURAL LANGUAGE PROCESSING BASED ON NATURALLY ANNOTATED BIG DATA, 2013, 8208 : 144 - 153
  • [47] BERT-based Ensemble Approaches for Hate Speech Detection
    Mnassri, Khouloud
    Rajapaksha, Praboda
    Farahbakhsh, Reza
    Crespi, Noel
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 4649 - 4654
  • [48] QUESTIONS NATURAL-LANGUAGE EXAMPLES IN CADUCEUS
    BOYCE, BR
    ONLINE, 1986, 10 (02): : 54 - 54
  • [49] Getting answers to natural language questions on the Web
    Radev, DR
    Libner, K
    Fan, WG
    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, 2002, 53 (05): : 359 - 364
  • [50] Predicting Difficulty and Discrimination of Natural Language Questions
    Byrd, Matthew A.
    Srivastava, Shashank
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): (SHORT PAPERS), VOL 2, 2022, : 119 - 130