Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence

Cited by: 0
Authors
Azim, Anee [1 ]
Clark, Leon [1 ]
Lau, Caleb [1 ]
Cobb, Miles [2 ]
Jenner, Kendall [1 ]
Affiliations
[1] Lockheed Martin Australia, STELaRLab, Melbourne, Vic, Australia
[2] Lockheed Martin Space, Sunnyvale, CA, USA
Source
SIGNAL PROCESSING, SENSOR/INFORMATION FUSION, AND TARGET RECOGNITION XXXIII | 2024 / Vol. 13057
Keywords
Activity Based Intelligence; Ontology; Large Language Model; Track Association
DOI
10.1117/12.3013332
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The development of Activity Based Intelligence (ABI) requires an understanding of individual actors' intents, their interactions with other entities in the environment, and how these interactions facilitate the accomplishment of their goals. Statistical modelling alone is insufficient for such analyses, mandating higher-level representations such as ontologies to capture important relationships. However, constructing ontologies for ABI, ensuring they remain grounded in real-world entities, and maintaining their applicability to downstream tasks require substantial hand-tooling by domain experts. In this paper, we propose the use of a Large Language Model (LLM) to bootstrap a grounding for such an ontology. Subsequently, we demonstrate that the experience encoded within the weights of a pre-trained LLM can be used in a zero-shot manner to provide a model of normalcy, enabling ABI analysis at the semantic level, agnostic to precise coordinate data. This is accomplished through a sequence of two transformations that convert a kinematic track into natural language narratives suitable for LLM input. The first transformation generates an abstraction of the low-level kinematic track, embedding it within a knowledge graph using a domain-specific ABI ontology. The second employs a template-driven narrative generation process to form natural language descriptions of behavior. Computing the LLM perplexity score over these narratives achieves grounding of the ontology, without relying on any prompt engineering. In characterizing the perplexity score for any given track, we observe significant variability depending on generation parameters such as sentence verbosity, attribute count, and clause ordering. Consequently, we propose an approach that generates multiple narratives for an individual track and supplies the resulting distribution of perplexity scores to downstream applications. We demonstrate the successful application of this methodology on a semantic track association task. Our subsequent analysis establishes how such an approach can augment existing kinematics-based association algorithms.
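The pipeline the abstract describes (kinematic track, then an ontology-backed semantic abstraction, then templated narratives, then perplexity scoring) lends itself to a compact illustration. Below is a minimal sketch, assuming a Hugging Face causal LM (gpt2) as the frozen pre-trained model; the track clauses, the narrative template, and the use of clause permutations as the variation mechanism are illustrative assumptions, not the paper's actual ontology or templates.

    # Minimal sketch: template-driven narratives scored by a frozen LM's perplexity.
    # Assumes: transformers + torch installed; gpt2 stands in for the paper's LLM.
    import math
    from itertools import permutations

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Score a narrative with the frozen LM; lower suggests more 'normal' behavior."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing labels makes the model return the mean token cross-entropy loss.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return math.exp(loss.item())

    # Hypothetical semantic abstraction of one kinematic track: ontology-level
    # clauses rather than raw coordinates (invented for illustration).
    clauses = [
        "a cargo vessel departed a commercial port",
        "it transited a designated shipping lane",
        "it loitered near a restricted area at night",
    ]

    # Template-driven narrative generation: vary clause ordering, one of the
    # parameters the abstract identifies as driving perplexity variability.
    narratives = [
        "The following was observed: " + "; then ".join(p) + "."
        for p in permutations(clauses)
    ]

    # Downstream tasks consume the distribution of scores, not a single value.
    scores = sorted(perplexity(n) for n in narratives)
    print(f"min={scores[0]:.1f} median={scores[len(scores) // 2]:.1f} max={scores[-1]:.1f}")

For the track association use mentioned at the end of the abstract, one could, for instance, compare the perplexity distributions of narratives built from candidate track pairings and fuse that semantic affinity with a conventional kinematic association cost; that fusion step is an assumption here, not a detail given in the abstract.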
Pages: 11