Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling

Cited by: 0
Authors
Wang, Jixuan [1 ,2 ,3 ]
Wei, Kai [3 ]
Radfar, Martin [3 ]
Zhang, Weiwei [3 ]
Chung, Clement [3 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Vector Inst, Toronto, ON, Canada
[3] Amazon Alexa, Pittsburgh, PA 15205 USA
Keywords
NEURAL-NETWORKS;
DOI
Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
We propose a novel Transformer encoder-based architecture that encodes syntactic knowledge for intent detection and slot filling. Specifically, we inject syntactic knowledge into the Transformer encoder by jointly training it to predict each token's syntactic parse ancestors and part-of-speech tag via multi-task learning. Our model is based on self-attention and feed-forward layers and does not require external syntactic information at inference time. Experiments on two benchmark datasets show that our models achieve state-of-the-art results with only two Transformer encoder layers. Compared to the previous best-performing model without pre-training, our models achieve absolute improvements of 1.59% F1 for slot filling and 0.85% accuracy for intent detection on the SNIPS dataset, and of 0.1% F1 and 0.34% accuracy, respectively, on the ATIS dataset. Furthermore, visualization of the self-attention weights illustrates the benefits of incorporating syntactic information during training.
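The abstract describes a multi-task objective that combines the main intent-detection and slot-filling losses with auxiliary syntactic-ancestor and part-of-speech prediction losses. A minimal sketch of such a combined objective is given below; the helper names, the use of per-token cross-entropy, and the auxiliary-task weights are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax_xent(logits, target):
    # Cross-entropy of one classifier's logits against an integer target class,
    # computed with the log-sum-exp trick for numerical stability.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def joint_loss(intent_logits, intent_label,
               slot_logits, slot_labels,
               anc_logits, anc_labels,
               pos_logits, pos_labels,
               w_anc=0.5, w_pos=0.5):
    # Hypothetical multi-task objective: one utterance-level intent loss plus
    # token-averaged losses for slot tags, syntactic parse ancestors, and POS
    # tags. The weights w_anc and w_pos are assumed hyperparameters.
    n = len(slot_labels)
    loss = softmax_xent(intent_logits, intent_label)
    loss += sum(softmax_xent(l, t) for l, t in zip(slot_logits, slot_labels)) / n
    loss += w_anc * sum(softmax_xent(l, t) for l, t in zip(anc_logits, anc_labels)) / n
    loss += w_pos * sum(softmax_xent(l, t) for l, t in zip(pos_logits, pos_labels)) / n
    return loss
```

At inference time only the intent and slot heads would be used, which matches the abstract's claim that no external syntactic information is needed then: the auxiliary losses shape the shared encoder during training and are simply dropped afterwards.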
Pages: 13943-13951
Page count: 9
Related Papers
50 records in total
  • [31] Task Conditioned BERT for Joint Intent Detection and Slot-Filling
    Tavares, Diogo
    Azevedo, Pedro
    Semedo, David
    Sousa, Ricardo
    Magalhaes, Joao
    PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT I, 2023, 14115 : 467 - 480
  • [32] Joint Slot Filling and Intent Detection via Capsule Neural Networks
    Zhang, Chenwei
    Li, Yaliang
    Du, Nan
    Fan, Wei
    Yu, Philip S.
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 5259 - 5267
  • [33] Decoupling Representation and Knowledge for Few-Shot Intent Classification and Slot Filling
    Han, Jie
    Zou, Yixiong
    Wang, Haozhao
    Wang, Jun
    Liu, Wei
    Wu, Yao
    Zhang, Tao
    Li, Ruixuan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 18171 - 18179
  • [34] JOINT MULTIPLE INTENT DETECTION AND SLOT FILLING VIA SELF-DISTILLATION
    Chen, Lisong
    Zhou, Peilin
    Zou, Yuexian
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7612 - 7616
  • [35] Joint Intent Detection and Slot Filling via CNN-LSTM-CRF
    Kane, Bamba
    Rossi, Fabio
    Guinaudeau, Ophelie
    Chiesa, Valeria
    Quenel, Ilhem
    Chau, Stephan
    2020 6TH IEEE CONGRESS ON INFORMATION SCIENCE AND TECHNOLOGY (IEEE CIST'20), 2020, : 342 - 347
  • [36] The Impact of Data Challenges on Intent Detection and Slot Filling for the Home Assistant Scenario
    Stoica, Anda
    Kadar, Tibor
    Lemnaru, Camelia
    Potolea, Rodica
    Dinsoreanu, Mihaela
    2019 IEEE 15TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP 2019), 2019, : 41 - 47
  • [37] Joint intent detection and slot filling with wheel-graph attention networks
    Wei, Pengfei
    Zeng, Bi
    Liao, Wenxiong
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42 (03) : 2409 - 2420
  • [38] ACJIS: A Novel Attentive Cross Approach For Joint Intent Detection And Slot Filling
    Yu, Shuai
    Shen, Lei
    Zhu, Pengcheng
    Chen, Jiansong
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [39] A Fast Attention Network for Joint Intent Detection and Slot Filling on Edge Devices
    Huang, L.
    Liang, S.
    Ye, F.
    Gao, N.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (02): : 530 - 540
  • [40] Intent Detection and Slot Filling with Capsule Net Architectures for a Romanian Home Assistant
    Stoica, Anda
    Kadar, Tibor
    Lemnaru, Camelia
    Potolea, Rodica
    Dinsoreanu, Mihaela
    SENSORS, 2021, 21 (04) : 1 - 28