Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing

Cited by: 0
Authors
Liu, Qian [1 ]
Yang, Dejian [2 ]
Zhang, Jiahui [1 ]
Guo, Jiaqi [3 ]
Zhou, Bin [1 ]
Lou, Jian-Guang [2 ]
Affiliations
[1] Beihang Univ, Beijing, Peoples R China
[2] Microsoft Res, Beijing, Peoples R China
[3] Xi An Jiao Tong Univ, Xian, Peoples R China
Keywords
DOI
Not available
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, pretrained language models (PLMs) have achieved success on several downstream tasks, demonstrating their power in modeling language. To better understand and leverage what PLMs have learned, several techniques have emerged to probe the syntactic structures entailed by PLMs. However, few efforts have been made to explore the grounding capabilities of PLMs, which are equally essential. In this paper, we highlight the ability of PLMs to discover which token should be grounded to which concept when combined with our proposed erasing-then-awakening approach. Empirical studies on four datasets demonstrate that our approach can awaken latent grounding that is understandable to human experts, even though it is never exposed to such labels during training. More importantly, our approach shows great potential to benefit downstream semantic parsing models. Taking text-to-SQL as a case study, we successfully couple our approach with two off-the-shelf parsers, obtaining an absolute improvement of up to 9.8%.
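To make the erasure intuition behind this kind of token-to-concept grounding concrete, here is a minimal, hedged Python sketch: it scores how strongly each question token is grounded to a schema concept by measuring how much the token's contextual representation shifts when that concept is masked out of the encoder input. The model name (bert-base-uncased), the toy question and schema, and the L2-shift heuristic are illustrative assumptions; this is not the paper's actual erasing-then-awakening training procedure.

```python
# Illustrative sketch only: erasure-based token-to-concept relevance scoring
# with a generic BERT encoder. Assumed, not from the paper: model choice,
# toy question/schema, and the L2-shift scoring heuristic.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

question = "show the name of singers older than 40"
concepts = ["singer.name", "singer.age", "concert.year"]  # toy schema columns


def encode(question, concepts, erased=None):
    """Encode question + schema; optionally replace one concept with [MASK]."""
    schema = [c if c != erased else tokenizer.mask_token for c in concepts]
    text = question + " " + tokenizer.sep_token + " " + " ".join(schema)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    # Question tokens sit right after [CLS], so their positions are unchanged
    # regardless of what happens to the schema part of the input.
    n_q = len(tokenizer(question, add_special_tokens=False)["input_ids"])
    return hidden[1 : 1 + n_q]


q_tokens = tokenizer.convert_ids_to_tokens(
    tokenizer(question, add_special_tokens=False)["input_ids"]
)
full = encode(question, concepts)
for concept in concepts:
    # A question token whose representation shifts a lot when the concept is
    # erased is (heuristically) grounded to that concept.
    shift = (full - encode(question, concepts, erased=concept)).norm(dim=-1)
    best = q_tokens[int(shift.argmax())]
    print(f"{concept:>14}: most affected question token -> {best!r}")
```

Running the script prints, for each schema concept, the question token whose representation is most perturbed by erasing that concept, which is a crude stand-in for the latent grounding the paper extracts and then refines.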
Pages: 1174-1189
Page count: 16