How do you know that you don't know?
Cited by: 0
Authors:
Gronau, Quentin F. [1,3]
Steyvers, Mark [2]
Brown, Scott D. [1]
Affiliations:
[1] Univ Newcastle, Sch Psychol Sci, Callaghan, Australia
[2] Univ Calif Irvine, Dept Cognit Sci, Irvine, CA USA
[3] Sch Psychol Sci, Callaghan Campus, Callaghan, NSW 2308, Australia
Source:
Funding:
Australian Research Council;
Keywords:
Model uncertainty;
Model mis-specification;
Maximum a posteriori probability;
NORMALIZING CONSTANTS;
MODEL;
DOI:
10.1016/j.cogsys.2024.101232
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
Whenever someone in a team tries to help others, it is crucial that they have some understanding of the other team members' goals. In modern teams, this applies equally to human and artificial ("bot") assistants. Understanding when one does not know something is crucial for stopping the execution of inappropriate behavior and, ideally, for attempting to learn more appropriate actions. From a statistical point of view, this can be translated to assessing whether none of the hypotheses in a considered set is correct. Here we investigate a novel approach for making this assessment based on monitoring the maximum a posteriori probability (MAP) of a set of candidate hypotheses as new observations arrive. Simulation studies suggest that this is a promising approach; however, we also caution that there may be cases where this assessment is more challenging. The problem we study and the solution we propose are general, with applications well beyond human-bot teaming, including, for example, the scientific process of theory development.
Pages: 9
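To make the MAP-monitoring idea from the abstract concrete, the sketch below (ours, not the authors' code) sequentially updates posterior probabilities over a small set of Gaussian mean hypotheses and records the MAP probability after each new observation. The candidate means, noise level, sample sizes, and the summary statistic printed at the end are illustrative assumptions; the paper's actual criterion for deciding that none of the hypotheses is correct is not reproduced here.

```python
# Minimal sketch of monitoring the maximum a posteriori (MAP) probability
# over a discrete set of candidate hypotheses as observations arrive.
# Assumptions (not from the paper): each hypothesis k claims the data are
# i.i.d. N(mu_k, sigma^2), all hypotheses start with equal prior probability,
# and the printed summary is only one illustrative way to inspect the trace.
import numpy as np
from scipy.stats import norm


def map_trace(observations, candidate_means, sigma=1.0):
    """Return the MAP probability after each observation."""
    log_post = np.zeros(len(candidate_means))     # log of a uniform prior
    trace = []
    for x in observations:
        # accumulate log-likelihoods for every candidate hypothesis
        log_post += norm.logpdf(x, loc=candidate_means, scale=sigma)
        post = np.exp(log_post - log_post.max())  # numerically stable softmax
        post /= post.sum()
        trace.append(post.max())                  # MAP probability so far
    return np.array(trace)


rng = np.random.default_rng(1)
candidates = np.array([2.0, 3.0])

# Well-specified stream: the true mean (2.0) is in the candidate set.
good = map_trace(rng.normal(2.0, 1.0, size=200), candidates)
# Mis-specified stream: the true mean (2.5) lies between the candidates.
bad = map_trace(rng.normal(2.5, 1.0, size=200), candidates)

# One illustrative summary: how often the MAP probability stays high.
for name, tr in [("well-specified", good), ("mis-specified", bad)]:
    print(f"{name}: share of steps with MAP > 0.95 = {np.mean(tr > 0.95):.2f}")
```

In this toy setup, the MAP probability tends to rise quickly and stay high when the true mean is in the candidate set, whereas for data generated between two equally distant candidates it tends to fluctuate; differences in the trajectory of this kind are what a monitoring approach of the sort described in the abstract would need to exploit.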