50 records in total
- [1] Compressing Context to Enhance Inference Efficiency of Large Language Models. In: Proceedings of EMNLP 2023, pp. 6342-6353.
- [2] Measuring and Improving the Energy Efficiency of Large Language Models Inference. IEEE Access, 2024, 12: 80194-80207.
- [3] Language Models for Lexical Inference in Context. In: Proceedings of EACL 2021, pp. 1267-1280.
- [4] Extending Context Window of Large Language Models via Semantic Compression. In: Findings of the Association for Computational Linguistics: ACL 2024, pp. 5169-5181.
- [6] Inference to the Best Explanation in Large Language Models. In: Proceedings of ACL 2024, Vol. 1: Long Papers, pp. 217-235.
- [7] Assessing Inference Time in Large Language Models. In: System Dependability - Theory and Applications, DepCoS-RELCOMEX 2024, 2024, 1026: 296-305.
- [9] Sources of Hallucination by Large Language Models on Inference Tasks. In: Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 2758-2774.
- [10] GPT-RE: In-context Learning for Relation Extraction using Large Language Models. In: Proceedings of EMNLP 2023, pp. 3534-3547.