Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence

Cited by: 0
Authors
Azim, Anee [1 ]
Clark, Leon [1 ]
Lau, Caleb [1 ]
Cobb, Miles [2 ]
Jenner, Kendall [1 ]
Affiliations
[1] Lockheed Martin Australia, STELaRLab, Melbourne, Vic, Australia
[2] Lockheed Martin Space, Sunnyvale, CA USA
Source
SIGNAL PROCESSING, SENSOR/INFORMATION FUSION, AND TARGET RECOGNITION XXXIII | 2024 / Vol. 13057
Keywords
Activity Based Intelligence; Ontology; Large Language Model; Track Association;
DOI
10.1117/12.3013332
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The development of Activity Based Intelligence (ABI) requires an understanding of individual actors' intents, their interactions with other entities in the environment, and how these interactions facilitate accomplishment of their goals. Statistical modelling alone is insufficient for such analyses, mandating higher-level representations such as ontologies to capture important relationships. However, constructing ontologies for ABI, ensuring they remain grounded to real-world entities, and maintaining their applicability to downstream tasks requires substantial hand-tooling by domain experts. In this paper, we propose the use of a Large Language Model (LLM) to bootstrap a grounding for such an ontology. Subsequently, we demonstrate that the experience encoded within the weights of a pre-trained LLM can be used in a zero-shot manner to provide a model of normalcy, enabling ABI analysis at the semantic level, agnostic to the precise coordinate data. This is accomplished through a sequence of two transformations that convert a kinematic track into natural-language narratives suitable for LLM input. The first transformation generates an abstraction of the low-level kinematic track, embedding it within a knowledge graph using a domain-specific ABI ontology. The second employs a template-driven narrative generation process to form natural-language descriptions of behavior. Computing the LLM's perplexity score over these narratives achieves grounding of the ontology, without reliance on any prompt engineering. In characterizing the perplexity score for any given track, we observe significant variability across chosen parameters such as sentence verbosity, attribute count, and clause ordering. Consequently, we propose an approach that considers multiple generated narratives for an individual track, and the distribution of their perplexity scores, for downstream applications.
We demonstrate the successful application of this methodology against a semantic track association task. Our subsequent analysis establishes how such an approach can be used to augment existing kinematics-based association algorithms.
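As a minimal sketch of the pipeline the abstract describes — template-driven narrative variants generated from ontology-level track attributes, then a per-track distribution of perplexity scores — the following Python uses hypothetical attribute triples and a toy unigram model in place of the paper's ABI ontology and pre-trained LLM. All names here are illustrative assumptions, not the authors' implementation.

```python
import math
import statistics
from itertools import permutations

# Hypothetical semantic abstraction of a kinematic track, expressed as
# (subject, predicate, object) triples from an ABI-style knowledge graph.
track_triples = [
    ("vessel", "departed", "a commercial port"),
    ("vessel", "transited", "a shipping lane"),
    ("vessel", "loitered near", "an anchorage area"),
]

def generate_narratives(triples):
    """Template-driven generation: one narrative variant per clause ordering.

    The paper notes that clause ordering (among other parameters) changes
    the perplexity an LLM assigns, motivating scoring many variants.
    """
    narratives = []
    for ordering in permutations(triples):
        clauses = [f"the {s} {p} {o}" for s, p, o in ordering]
        narratives.append("First " + ", then ".join(clauses) + ".")
    return narratives

def toy_perplexity(text, token_logprob):
    """Perplexity = exp(mean negative log-likelihood per token).

    A real implementation would sum token log-probabilities from a
    pre-trained causal LLM; a unigram stand-in keeps this self-contained.
    (Being order-insensitive, a unigram model scores all orderings alike;
    an actual LLM's perplexity varies with clause ordering.)
    """
    tokens = text.lower().strip(".").split()
    nll = -sum(token_logprob(tok) for tok in tokens) / len(tokens)
    return math.exp(nll)

# Stand-in "language model": frequent function words are more probable.
COMMON = {"the", "a", "first", "then", "vessel"}
unigram = lambda tok: math.log(0.05) if tok in COMMON else math.log(0.005)

narratives = generate_narratives(track_triples)
scores = [toy_perplexity(n, unigram) for n in narratives]
# Downstream tasks would consume the distribution, not a single score.
print(len(narratives), round(statistics.mean(scores), 2), statistics.pstdev(scores))
```

For a track-association task, one would compare the perplexity distributions of narratives describing candidate track pairings, alongside kinematic association scores.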
Pages: 11