50 entries in total
- [1] Towards Mitigating Hallucination in Large Language Models via Self-Reflection. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 1827-1843.
- [2] Mitigating Factual Inconsistency and Hallucination in Large Language Models. Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024), 2024: 1169-1170.
- [4] Wordflow: Social Prompt Engineering for Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 3: System Demonstrations, 2024: 42-50.
- [6] Mitigating Value Hallucination in Dyna-Style Planning via Multistep Predecessor Models. Journal of Artificial Intelligence Research, 2024, 80: 441-473.
- [8] Prompt Engineering: Guiding the Way to Effective Large Language Models. Iraqi Journal for Computer Science and Mathematics, 2023, 4(4): 151-155.
- [9] Mitigating Hallucination in Visual-Language Models via Re-balancing Contrastive Decoding. Pattern Recognition and Computer Vision (PRCV 2024), Pt. V, 2025, 15035: 482-496.
- [10] Empowerment of Large Language Models in Psychological Counseling through Prompt Engineering. 2024 IEEE 4th International Conference on Software Engineering and Artificial Intelligence (SEAI 2024), 2024: 220-225.