50 entries in total
- [1] Sources of Hallucination by Large Language Models on Inference Tasks. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023, pp. 2758-2774
- [4] Mitigating Factual Inconsistency and Hallucination in Large Language Models. Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024), 2024, pp. 1169-1170
- [5] Chain-of-Verification Reduces Hallucination in Large Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 3563-3578
- [6] Untangling Emotional Threads: Hallucination Networks of Large Language Models. Complex Networks & Their Applications XII, Vol. 1 (COMPLEX NETWORKS 2023), 2024, vol. 1141, pp. 202-214
- [7] Evaluating Object Hallucination in Large Vision-Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023, pp. 292-305
- [9] HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023, pp. 6449-6464
- [10] Hallucination Detection for Generative Large Language Models by Bayesian Sequential Estimation. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023, pp. 15361-15371