Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction

Cited by: 0
Authors
Alt, Christoph [1]
Gabryszak, Aleksandra [1]
Hennig, Leonhard [1]
Affiliations
[1] German Res Ctr Artificial Intelligence DFKI, Speech & Language Technol Lab, Kaiserslautern, Germany
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models. Common methods encode the source sentence, conditioned on the entity mentions, before classifying the relation. However, the complexity of the task makes it difficult to understand how encoder architecture and supporting linguistic knowledge affect the features learned by the encoder. We introduce 14 probing tasks targeting linguistic properties relevant to RE, and we use them to study representations learned by more than 40 different encoder architecture and linguistic feature combinations trained on two datasets, TACRED and SemEval 2010 Task 8. We find that the bias induced by the architecture and the inclusion of linguistic features are clearly expressed in the probing task performance. For example, adding contextualized word representations greatly increases performance on probing tasks with a focus on named entity and part-of-speech information, and yields better results in RE. In contrast, entity masking improves RE, but considerably lowers performance on entity type related probing tasks.
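The probing setup the abstract describes — training a lightweight classifier on frozen sentence representations to test whether a linguistic property is recoverable from them — can be sketched as follows. This is a minimal illustration under assumed conditions, not the authors' code: the 64-dimensional "encodings" and the planted binary label (standing in for, say, an entity-type property) are synthetic.

```python
# Minimal sketch of a probing classifier (hypothetical setup, not the
# paper's implementation): a linear probe is trained on frozen sentence
# representations to predict a linguistic property. High probe accuracy
# suggests the property is linearly recoverable from the representation.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen 64-d sentence encodings from an RE encoder; a
# linear signal for the probed property is planted along w_true.
n, d = 1000, 64
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)  # probed linguistic label (binary)

# Train/test split; the encoder stays fixed, only the probe learns.
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

# Logistic-regression probe trained with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))       # sigmoid predictions
    w -= 0.1 * X_tr.T @ (p - y_tr) / len(y_tr)  # cross-entropy gradient

acc = float(((X_te @ w > 0) == (y_te > 0.5)).mean())
print(f"probe accuracy: {acc:.2f}")
```

In the paper's setting, the probe's test accuracy on each of the 14 tasks is what is compared across the 40+ encoder/feature combinations; the probe is kept deliberately simple so that its accuracy reflects the representation rather than the classifier.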
Pages: 1534 - 1545
Page count: 12
Related papers
50 in total
  • [31] Feature-Level Attention Based Sentence Encoding for Neural Relation Extraction
    Dai, Longqi
    Xu, Bo
    Song, Hui
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I, 2019, 11838 : 184 - 196
  • [32] Sentence-Level Content Planning and Style Specification for Neural Text Generation
    Hua, Xinyu
    Wang, Lu
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 591 - 602
  • [33] Chinese Sentence-level Event Factuality Identification with Recursive Neural Network
    Yi, Qingqing
    Qian, Zhong
    Li, Peifeng
    Zhu, Qiaoming
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [34] Sentence-level control vectors for deep neural network speech synthesis
    Watts, Oliver
    Wu, Zhizheng
    King, Simon
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015, : 2217 - 2221
  • [35] An Approach Based on Multilevel Convolution for Sentence-Level Element Extraction of Legal Text
    Chen, Zhe
    Zhang, Hongli
    Ye, Lin
    Li, Shang
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021
  • [36] MeSH qualifiers, publication types and relation occurrence frequency are also useful for a better sentence-level extraction of biomedical relations
    Turki, Houcemeddine
    Taieb, Mohamed Ali Hadj
    Ben Aouicha, Mohamed
    JOURNAL OF BIOMEDICAL INFORMATICS, 2018, 83 : 217 - 218
  • [37] Aggregating Sentence-level Features for Chinese Near-duplicate Document Detection
    Liang, Yan
    Tao, Yizheng
    Feng, Ning
    Wan, Zhenjing
    Xu, Feng
    Jiang, Xue
    Gao, Shan
    PROCEEDINGS OF THE 2017 IEEE 14TH INTERNATIONAL CONFERENCE ON NETWORKING, SENSING AND CONTROL (ICNSC 2017), 2017, : 174 - 179
  • [38] A Deep Neural Architecture for Sentence-Level Sentiment Classification in Twitter Social Networking
    Huy Nguyen
    Minh-Le Nguyen
    COMPUTATIONAL LINGUISTICS, PACLING 2017, 2018, 781 : 15 - 27
  • [39] How the Brain Dynamically Constructs Sentence-Level Meanings From Word-Level Features
    Aguirre-Celis, Nora
    Miikkulainen, Risto
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [40] Using Sentence-level Classification Helps Entity Extraction from Material Science Literature
    Mullick, Ankan
    Pal, Shubhraneel
    Nayak, Tapas
    Lee, Seung-Cheol
    Bhattacharjee, Satadeep
    Goyal, Pawan
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 4540 - 4545