Keep Skills in Mind: Understanding and Implementing Skills in Commonsense Question Answering

Cited: 0
Authors
Bao, Meikai [1 ,2 ]
Liu, Qi [1 ,2 ]
Zhang, Kai [1 ,2 ]
Liu, Ye [1 ,2 ]
Yue, Linan [1 ,2 ]
Li, Longfei [3 ]
Zhou, Jun [3 ]
Affiliations
[1] Univ Sci & Technol China, Anhui Prov Key Lab Big Data Anal & Applicat, Hefei, Peoples R China
[2] State Key Lab Cognit Intelligence, Beijing, Peoples R China
[3] Ant Financial Serv Grp, Hangzhou, Peoples R China
Keywords
DOI
None
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Commonsense Question Answering (CQA) aims to answer questions that require human commonsense. Closed-book CQA, one of its subtasks, requires the model to answer questions without retrieving external knowledge, which places a premium on the model's own problem-solving ability. Most previous methods rely on large-scale pre-trained models to generate question-related knowledge while ignoring the crucial role of skills in answering commonsense questions. Generally, skills refer to learned abilities to perform a specific task or activity, derived from knowledge and experience. In this paper, we introduce a new approach named Dynamic Skill-aware Commonsense Question Answering (DSCQA), which transcends the limitations of traditional methods by informing the model of the need for each skill in a question and using skills as a critical driver of the CQA process. Specifically, DSCQA first employs a commonsense skill extraction module to generate representations of the various skills. Then, DSCQA uses a dynamic skill module to generate dynamic skill representations. Finally, in a perception and emphasis module, the skill and dynamic skill representations are used to assist the question-answering process. Experimental results on two publicly available CQA datasets show the effectiveness of our proposed model and the considerable impact of introducing skills.
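The three-stage pipeline the abstract describes (skill extraction, dynamic skill weighting, and a perception-and-emphasis step) can be sketched roughly as follows. This is a minimal illustrative sketch only: the function names, linear projections, and attention-style weighting are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the DSCQA pipeline described in the abstract.
# All module internals here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def extract_skill_representations(question_emb, num_skills=4, dim=8):
    """Commonsense skill extraction module (assumed): project the question
    embedding into one representation per predefined skill."""
    projections = [rng.standard_normal((dim, dim)) for _ in range(num_skills)]
    return np.stack([question_emb @ W for W in projections])  # (num_skills, dim)

def dynamic_skill_representation(skill_reps, question_emb):
    """Dynamic skill module (assumed): weight each skill by its relevance
    to the question (softmax over dot-product scores) and mix them."""
    scores = skill_reps @ question_emb
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ skill_reps  # per-skill weights, mixed (dim,)

def answer_scores(question_emb, choice_embs, dynamic_skill):
    """Perception-and-emphasis step (assumed): score each answer choice
    against the skill-augmented question representation."""
    augmented = question_emb + dynamic_skill
    return choice_embs @ augmented  # one score per candidate answer
```

The softmax weights make the skill mixture "dynamic" in the sense the abstract suggests: each question emphasizes the skills it actually needs.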
Pages: 5012 - 5020
Page count: 9