50 items in total
- [41] TaskLAMA: Probing the Complex Task Understanding of Language Models. Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), 38(17): 19468–19476.
- [42] Probing for Hyperbole in Pre-Trained Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Student Research Workshop (ACL-SRW 2023), Vol. 4: 200–211.
- [43] Probing for Predicate Argument Structures in Pretrained Language Models. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers): 4622–4632.
- [44] Probing Pretrained Language Models for Semantic Attributes and their Values. Findings of the Association for Computational Linguistics: EMNLP 2021: 2554–2559.
- [45] Propositional Reasoning via Neural Transformer Language Models. Neural-Symbolic Learning and Reasoning (NeSy 2022): 104–119.
- [46] Can Transformer Language Models Predict Psychometric Properties? Proceedings of the 10th Conference on Lexical and Computational Semantics (*SEM 2021): 12–25.
- [47] Improved Hybrid Streaming ASR with Transformer Language Models. Interspeech 2020: 2127–2131.
- [48] Comparing Symbolic Models of Language via Bayesian Inference. Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), 35: 15799–15800.
- [49] Sources of Hallucination by Large Language Models on Inference Tasks. Findings of the Association for Computational Linguistics: EMNLP 2023: 2758–2774.
- [50] Do Language Models Perform Generalizable Commonsense Inference? Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021: 3681–3688.