Automatically resolving conflicts between expert systems: An experimental approach using large language models and fuzzy cognitive maps from participatory modeling studies
Cited by: 0
Authors:
Schuerkamp, Ryan
Affiliation:
Carnegie Mellon Univ, Sch Comp Sci, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Keywords:
Cognitive dissonance;
Fuzzy cognitive maps;
Generative AI;
Large language models;
Mental model;
Natural language generation;
GENERATIVE AI;
OBESITY;
DOI:
10.1016/j.knosys.2025.113151
Chinese Library Classification (CLC):
TP18 [Theory of artificial intelligence];
Discipline classification codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
A mental model is an individual's internal representation of knowledge that enables reasoning in a given domain. Cognitive dissonance arises in a mental model when there is internal conflict, causing discomfort, which individuals seek to minimize by resolving the dissonance. Modelers frequently use fuzzy cognitive maps (FCMs) to represent mental models and perspectives on a system and to facilitate reasoning. Dissonance may arise in FCMs when two individuals with conflicting mental models interact (e.g., in a hybrid agent-based model with FCMs representing individuals' mental models). We define cognitive dissonance for FCMs and develop an algorithm to automatically resolve it by leveraging large language models (LLMs). We apply our algorithm to real-world case studies and find our approach can successfully resolve the dissonance, suggesting LLMs can broadly resolve conflict within expert systems. Additionally, our method may identify opportunities for knowledge editing of LLMs when the dissonance cannot be satisfactorily resolved through our algorithm.
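To make the abstract's setup concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes each FCM is stored as a signed weight matrix, treats opposite-signed causal edges between two maps as dissonance, and only prints a candidate resolution prompt rather than calling an LLM. The concept names, weights, and prompt wording are illustrative assumptions.

```python
import numpy as np

# Assumed concept labels for two experts' FCMs (illustrative only).
concepts = ["diet quality", "physical activity", "obesity"]

# Entry (i, j) is the causal weight from concept i to concept j, in [-1, 1].
fcm_a = np.array([
    [0.0, 0.0, -0.7],   # expert A: better diet reduces obesity
    [0.0, 0.0, -0.6],
    [0.0, 0.0,  0.0],
])
fcm_b = np.array([
    [0.0, 0.0,  0.4],   # expert B: the same edge has the opposite sign
    [0.0, 0.0, -0.5],
    [0.0, 0.0,  0.0],
])

def conflicting_edges(a, b):
    """Return (i, j) pairs where the two maps assign opposite-signed weights."""
    return [(i, j)
            for i in range(a.shape[0])
            for j in range(a.shape[1])
            if a[i, j] * b[i, j] < 0]

for i, j in conflicting_edges(fcm_a, fcm_b):
    # A prompt like this could be sent to an LLM to propose a resolved weight;
    # the actual call and prompt format are assumptions and are omitted here.
    prompt = (
        f"Two experts disagree on the causal link from '{concepts[i]}' to "
        f"'{concepts[j]}': weights {fcm_a[i, j]:+.1f} vs {fcm_b[i, j]:+.1f}. "
        "Propose a single resolved weight in [-1, 1] with a brief justification."
    )
    print(prompt)
```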