Coalition Situational Understanding via Explainable Neuro-Symbolic Reasoning and Learning

Cited by: 0
|
Authors
Preece, Alun [1]
Braines, Dave [1, 2]
Cerutti, Federico [1, 3]
Furby, Jack [1]
Hiley, Liam [1]
Kaplan, Lance [4]
Law, Mark [5]
Russo, Alessandra [5]
Srivastava, Mani [6]
Vilamala, Marc Roig [1]
Xing, Tianwei [6]
Affiliations
[1] Cardiff Univ, Cardiff, Wales
[2] IBM Res Europe, Warrington, Cheshire, England
[3] Univ Brescia, Brescia, Italy
[4] DEVCOM Army Res Lab, Adelphi, MD USA
[5] Imperial Coll London, London, England
[6] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS III | 2021 / Vol. 11746
Keywords
situational understanding; coalition; artificial intelligence; machine learning; machine reasoning; explainability;
DOI
10.1117/12.2587850
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Achieving coalition situational understanding (CSU) involves both insight, i.e., recognising existing situations, and foresight, i.e., learning and reasoning to draw inferences about those situations, by exploiting assets from across a coalition, including sensor feeds of various modalities and analytic services. Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to CSU. However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services cannot be 'black boxes'; they must be capable of explaining their outputs. In this paper we describe an integrated CSU architecture that combines deep neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds, and how the combined neuro-symbolic system provides a layered approach to explainability. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to deal with temporal features.
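To make the layered neuro-symbolic idea in the abstract concrete, the following is a minimal, hypothetical sketch and not the authors' implementation: a stand-in "neural" perception layer scores symbolic events from windowed multimodal sensor features, and a hand-written symbolic rule layer fuses those events over a short temporal window into situation inferences, returning the supporting events as a simple explanation trace. All names, thresholds, rules, and labels (detect_events, RULES, infer_situation, the event and situation names) are illustrative assumptions.

```python
# Minimal illustrative sketch of a layered neuro-symbolic pipeline
# (hypothetical; not the system described in the paper).
import numpy as np

np.random.seed(0)

# --- neural layer (stand-in): map a sensor window to event probabilities ---
EVENTS = ["crowd_forming", "vehicle_stopped", "loud_noise"]

def detect_events(window: np.ndarray) -> dict:
    """Toy stand-in for a trained multimodal network: returns P(event)."""
    logits = window.mean(axis=0)[: len(EVENTS)]   # fake per-event scores
    probs = 1.0 / (1.0 + np.exp(-logits))         # sigmoid per event
    return dict(zip(EVENTS, probs))

# --- symbolic layer: temporal rules over detected events -------------------
# Each rule: (situation, required events, min probability, temporal span)
RULES = [
    ("protest_situation", ["crowd_forming", "loud_noise"], 0.6, 3),
    ("checkpoint_situation", ["vehicle_stopped"], 0.7, 2),
]

def infer_situation(event_history: list) -> list:
    """Fire any rule whose required events all exceed the threshold within
    the last `span` time steps; return (situation, supporting evidence)."""
    conclusions = []
    for situation, required, thresh, span in RULES:
        recent = event_history[-span:]
        support = {}
        for ev in required:
            best = max((step[ev] for step in recent), default=0.0)
            if best >= thresh:
                support[ev] = round(float(best), 2)
        if len(support) == len(required):
            # The supporting events double as a human-readable explanation.
            conclusions.append((situation, support))
    return conclusions

# --- run on a short synthetic sensor stream --------------------------------
history = []
for t in range(5):
    window = np.random.randn(10, 8) + t * 0.5     # synthetic sensor features
    history.append(detect_events(window))

for situation, evidence in infer_situation(history):
    print(f"{situation} inferred; supporting events: {evidence}")
```

In this sketch the explanation is layered in the sense the abstract describes: the symbolic layer reports which rule fired and which detected events supported it, while per-event explanations (e.g., saliency over the sensor window) would be the responsibility of the neural layer.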
Pages: 12