Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence

Cited: 0
Authors
Azim, Anee [1 ]
Clark, Leon [1 ]
Lau, Caleb [1 ]
Cobb, Miles [2 ]
Jenner, Kendall [1 ]
Affiliations
[1] Lockheed Martin Australia, STELaRLab, Melbourne, Vic, Australia
[2] Lockheed Martin Space, Sunnyvale, CA USA
Source
SIGNAL PROCESSING, SENSOR/INFORMATION FUSION, AND TARGET RECOGNITION XXXIII | 2024 / Vol. 13057
Keywords
Activity Based Intelligence; Ontology; Large Language Model; Track Association;
DOI
10.1117/12.3013332
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The development of Activity Based Intelligence (ABI) requires an understanding of individual actors' intents, their interactions with other entities in the environment, and how these interactions facilitate accomplishment of their goals. Statistical modelling alone is insufficient for such analyses, mandating higher-level representations such as ontologies to capture important relationships. However, constructing ontologies for ABI, ensuring they remain grounded to real-world entities, and maintaining their applicability to downstream tasks requires substantial hand-tooling by domain experts. In this paper, we propose the use of a Large Language Model (LLM) to bootstrap a grounding for such an ontology. Subsequently, we demonstrate that the experience encoded within the weights of a pre-trained LLM can be used in a zero-shot manner to provide a model of normalcy, enabling ABI analysis at the semantic level, agnostic to the precise coordinate data. This is accomplished through a sequence of two transformations, applied to a kinematic track, that yield natural language narratives suitable for LLM input. The first transformation generates an abstraction of the low-level kinematic track, embedding it within a knowledge graph using a domain-specific ABI ontology. Secondly, we employ a template-driven narrative generation process to form natural language descriptions of behavior. Computing the LLM perplexity score over these narratives achieves grounding of the ontology, without relying on any prompt engineering. In characterizing the perplexity score for any given track, we observe significant variability with respect to generation parameters such as sentence verbosity, attribute count, and clause ordering. Consequently, we propose an approach that considers multiple generated narratives for an individual track and passes the distribution of perplexity scores to downstream applications. We demonstrate the successful application of this methodology to a semantic track association task. Our subsequent analysis establishes how such an approach can be used to augment existing kinematics-based association algorithms.
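The scoring step described above, computing a pre-trained LLM's perplexity over template-generated narratives and retaining the distribution of scores across narrative variants, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `gpt2` checkpoint, the `TEMPLATES` list, and the track attribute names are hypothetical stand-ins for the paper's LLM, its template-driven narrative generator, and its ontology-derived track abstraction.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical stand-in for the paper's pre-trained LLM; any causal LM works.
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(narrative: str) -> float:
    """Zero-shot perplexity of one narrative; no prompt engineering involved."""
    ids = tokenizer(narrative, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean token cross-entropy.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Hypothetical templates varying verbosity and clause ordering, mirroring the
# template-driven narrative generation described in the abstract.
TEMPLATES = [
    "A {actor} departed {origin} and proceeded toward {destination} at {speed} knots.",
    "Travelling at {speed} knots, a {actor} left {origin} heading for {destination}.",
    "A {actor} was observed near {origin}, later moving toward {destination}.",
]

def perplexity_distribution(attributes: dict) -> list[float]:
    """Score every narrative variant for one track; keep the full distribution."""
    return [perplexity(t.format(**attributes)) for t in TEMPLATES]

# Example: semantic attributes abstracted from a kinematic track via the ontology.
track = {"actor": "cargo vessel", "origin": "the harbour",
         "destination": "open water", "speed": "12"}
scores = perplexity_distribution(track)
```

A downstream association step could then compare these per-track score distributions between candidate pairs (for example via their medians or a two-sample test) alongside conventional kinematic gating, consistent with the augmentation of kinematics-based association algorithms that the abstract describes.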
Pages: 11