50 items in total
- [41] Sparse Pairwise Re-ranking with Pre-trained Transformers. In: Proceedings of the 2022 ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR 2022), 2022: 250-258.
- [42] An empirical study of pre-trained language models in simple knowledge graph question answering. World Wide Web, 2023, 26(5): 2855-2886.
- [43] BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models. In: Findings of the Association for Computational Linguistics: EMNLP 2020, 2020: 2281-2290.
- [46] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020: 4487-4497.
- [48] Routing Generative Pre-Trained Transformers for Printed Circuit Board. In: 2024 International Symposium of Electronics Design Automation (ISEDA 2024), 2024: 160-165.