Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence

Cited: 0
Authors
Azim, Anee [1 ]
Clark, Leon [1 ]
Lau, Caleb [1 ]
Cobb, Miles [2 ]
Jenner, Kendall [1 ]
Affiliations
[1] Lockheed Martin Australia, STELaRLab, Melbourne, Vic, Australia
[2] Lockheed Martin Space, Sunnyvale, CA USA
Keywords
Activity Based Intelligence; Ontology; Large Language Model; Track Association;
DOI
10.1117/12.3013332
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The development of Activity Based Intelligence (ABI) requires an understanding of individual actors' intents, their interactions with other entities in the environment, and how these interactions facilitate accomplishment of their goals. Statistical modelling alone is insufficient for such analyses, mandating higher-level representations such as ontologies to capture important relationships. However, constructing ontologies for ABI, ensuring they remain grounded to real-world entities, and maintaining their applicability to downstream tasks requires substantial hand-tooling by domain experts. In this paper, we propose the use of a Large Language Model (LLM) to bootstrap a grounding for such an ontology. Subsequently, we demonstrate that the experience encoded within the weights of a pre-trained LLM can be used in a zero-shot manner to provide a model of normalcy, enabling ABI analysis at the semantic level, agnostic to the precise coordinate data. This is accomplished through a sequence of two transformations, applied to a kinematic track, that yields natural-language narratives suitable for LLM input. The first transformation generates an abstraction of the low-level kinematic track, embedding it within a knowledge graph using a domain-specific ABI ontology. The second employs a template-driven narrative generation process to form natural-language descriptions of behavior. Computing the LLM perplexity score on these narratives achieves grounding of the ontology, without relying on any prompt engineering. In characterizing the perplexity score for any given track, we observe significant variability depending on chosen parameters such as sentence verbosity, attribute count, and clause ordering. Consequently, we propose an approach that considers multiple generated narratives for an individual track and uses the resulting distribution of perplexity scores for downstream applications.
We demonstrate the successful application of this methodology to a semantic track association task. Our subsequent analysis establishes how such an approach can be used to augment existing kinematics-based association algorithms.
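As a rough illustration of the pipeline the abstract describes, and not code from the paper itself, the sketch below renders several template-driven narratives from the same knowledge-graph attributes (varying clause ordering and verbosity), scores each by perplexity, and summarizes the resulting distribution. The templates, the example facts, and the `score_fn` stub are all hypothetical; in practice `score_fn` would wrap a real pre-trained LLM returning per-token log-probabilities.

```python
import math
import statistics

def render_narratives(entity, facts, templates):
    """Render one natural-language narrative per template from the same
    knowledge-graph attributes (hypothetical ABI ontology slots)."""
    return [t.format(entity=entity, **facts) for t in templates]

def perplexity(token_logprobs):
    """PPL = exp(-(1/N) * sum_i log p(token_i | context)); lower scores
    indicate behavior the LLM finds more 'normal'."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical templates differing in clause ordering and verbosity.
templates = [
    "{entity} was observed {activity} in {region} at {speed}.",
    "In {region}, {entity} proceeded at {speed} while {activity}.",
]
facts = {"activity": "loitering", "region": "the strait", "speed": "low speed"}
narratives = render_narratives("Vessel 12", facts, templates)

def score_fn(text):
    # Stub: a real implementation would query a pre-trained LLM for
    # per-token log-probabilities of `text`.
    return [-2.0] * len(text.split())

# Keep the full distribution over narrative variants, not a single score.
ppls = [perplexity(score_fn(n)) for n in narratives]
summary = (statistics.mean(ppls), statistics.pstdev(ppls))
```

Downstream tasks such as track association can then compare these per-track perplexity distributions rather than relying on a single narrative's score, which the abstract notes is sensitive to template choices.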
Pages: 11