Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence

Cited by: 0
Authors
Azim, Anee [1 ]
Clark, Leon [1 ]
Lau, Caleb [1 ]
Cobb, Miles [2 ]
Jenner, Kendall [1 ]
Affiliations
[1] Lockheed Martin Australia, STELaRLab, Melbourne, Vic, Australia
[2] Lockheed Martin Space, Sunnyvale, CA, USA
Source
SIGNAL PROCESSING, SENSOR/INFORMATION FUSION, AND TARGET RECOGNITION XXXIII | 2024 / Vol. 13057
Keywords
Activity Based Intelligence; Ontology; Large Language Model; Track Association
DOI
10.1117/12.3013332
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
The development of Activity Based Intelligence (ABI) requires an understanding of individual actors' intents, their interactions with other entities in the environment, and how these interactions facilitate accomplishment of their goals. Statistical modelling alone is insufficient for such analyses, mandating higher-level representations such as ontologies to capture important relationships. However, constructing ontologies for ABI, ensuring they remain grounded in real-world entities, and maintaining their applicability to downstream tasks requires substantial hand-tooling by domain experts. In this paper, we propose the use of a Large Language Model (LLM) to bootstrap a grounding for such an ontology. Subsequently, we demonstrate that the experience encoded within the weights of a pre-trained LLM can be used in a zero-shot manner to provide a model of normalcy, enabling ABI analysis at the semantic level, agnostic to the precise coordinate data. This is accomplished through a sequence of two transformations applied to a kinematic track, yielding natural language narratives suitable for LLM input. The first transformation abstracts the low-level kinematic track, embedding it within a knowledge graph using a domain-specific ABI ontology. The second employs a template-driven narrative generation process to form natural language descriptions of behavior. Computing the LLM perplexity score over these narratives achieves grounding of the ontology, without relying on any prompt engineering. In characterizing the perplexity score for any given track, we observe significant variability depending on parameters such as sentence verbosity, attribute count, and clause ordering. Consequently, we propose an approach that considers multiple generated narratives for an individual track and uses the distribution of their perplexity scores for downstream applications. We demonstrate the successful application of this methodology to a semantic track association task. Our subsequent analysis establishes how such an approach can be used to augment existing kinematics-based association algorithms.
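The pipeline described in the abstract (ontology-based abstraction of a track, template-driven narrative generation, and zero-shot perplexity scoring over multiple narrative variants) can be illustrated with a minimal sketch. The track attributes, sentence templates, and the choice of GPT-2 via Hugging Face transformers below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: score template-generated track narratives with a
# pre-trained LLM's perplexity. All entity names, templates, and the
# choice of GPT-2 are assumptions for illustration only.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Toy semantic abstraction of one kinematic track: attributes an ABI
# ontology might attach to the low-level data within a knowledge graph.
track_facts = {
    "actor": "a fishing vessel",
    "activity": "loitering",
    "location": "near a shipping lane",
    "time": "shortly after midnight",
}

# Template-driven narrative generation: vary clause ordering and
# verbosity to produce several descriptions of the same behavior.
templates = [
    "{actor} was observed {activity} {location} {time}.",
    "{time}, {actor} was {activity} {location}.",
    "{actor} was {activity} {location}.",
]

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the pre-trained LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return math.exp(out.loss.item())

# One perplexity per narrative variant; downstream tasks would consume
# this distribution rather than any single score, since the value varies
# with verbosity, attribute count, and clause ordering.
scores = [perplexity(t.format(**track_facts)) for t in templates]
print(f"min={min(scores):.1f}  max={max(scores):.1f}  "
      f"mean={sum(scores) / len(scores):.1f}")
```

In a track association setting, the per-track perplexity distributions obtained this way would presumably be compared or fused with kinematics-based association scores, consistent with the augmentation the abstract describes.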
Pages: 11