Deciphering Human Mobility: Inferring Semantics of Trajectories with Large Language Models

Cited: 0
Authors
Luo, Yuxiao [1 ]
Cao, Zhongcai [1 ]
Jin, Xin [1 ]
Liu, Kang [1 ]
Yin, Ling [1 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE 2024 25TH IEEE INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT, MDM 2024 | 2024
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Human mobility analysis; Large language models; Trajectory semantic inference; TRAVEL; PATTERNS;
DOI
10.1109/MDM61037.2024.00060
CLC Classification Number
TP [Automation technology, computer technology];
Discipline Classification Code
0812 ;
Abstract
Understanding human mobility patterns is essential for various applications, from urban planning to public safety. Individual trajectory data, such as mobile phone location records, while rich in spatio-temporal information, often lack semantic detail, limiting their utility for in-depth mobility analysis. Existing methods can infer basic routine activity sequences from such data but lack depth in understanding complex human behaviors and user characteristics. Additionally, they depend on hard-to-obtain auxiliary datasets such as travel surveys. To address these limitations, this paper defines trajectory semantic inference through three key dimensions: user occupation category, activity sequence, and trajectory description, and proposes the Trajectory Semantic Inference with Large Language Models (TSI-LLM) framework to leverage LLMs to infer trajectory semantics comprehensively and deeply. We adopt spatio-temporal-attribute-enhanced data formatting (STFormat) and design a context-inclusive prompt, enabling LLMs to more effectively interpret and infer the semantics of trajectory data. Experimental validation on real-world trajectory datasets demonstrates the efficacy of TSI-LLM in deciphering complex human mobility patterns. This study explores the potential of LLMs in enhancing the semantic analysis of trajectory data, paving the way for more sophisticated and accessible human mobility research.
Pages: 289 - 294
Page count: 6
Related Papers
50 records
  • [41] A survey on LoRA of large language models
    Mao, Yuren
    Ge, Yuhang
    Fan, Yijiang
    Xu, Wenyi
    Mi, Yu
    Hu, Zhonghao
    Gao, Yunjun
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (07)
  • [42] Meaning and understanding in large language models
    Havlik, Vladimir
    SYNTHESE, 2024, 205 (01)
  • [43] A Survey on Evaluation of Large Language Models
    Chang, Yupeng
    Wang, Xu
    Wang, Jindong
    Wu, Yuan
    Yang, Linyi
    Zhu, Kaijie
    Chen, Hao
    Yi, Xiaoyuan
    Wang, Cunxiang
    Wang, Yidong
    Ye, Wei
    Zhang, Yue
    Chang, Yi
    Yu, Philip S.
    Yang, Qiang
    Xie, Xing
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (03)
  • [44] Emotional intelligence of Large Language Models
    Wang, Xuena
    Li, Xueting
    Yin, Zi
    Wu, Yue
    Liu, Jia
    JOURNAL OF PACIFIC RIM PSYCHOLOGY, 2023, 17
  • [45] Technical foundations of large language models
    Bluethgen, Christian
    RADIOLOGIE, 2025, : 227 - 234
  • [46] Large language models and their applications in bioinformatics
    Sarumi, Oluwafemi A.
    Heider, Dominik
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2024, 23 : 3498 - 3505
  • [47] On the Question of Authorship in Large Language Models
    Soos, Carlin
    Haroutunian, Levon
    KNOWLEDGE ORGANIZATION, 2024, 51 (02): : 83 - 95
  • [48] Large language models and linguistic intentionality
    Grindrod, Jumbly
    SYNTHESE, 2024, 204 (02)
  • [49] Process Modeling with Large Language Models
    Kourani, Humam
    Berti, Alessandro
    Schuster, Daniel
    van der Aalst, Wil M. P.
    ENTERPRISE, BUSINESS-PROCESS AND INFORMATION SYSTEMS MODELING, BPMDS 2024, EMMSAD 2024, 2024, 511 : 229 - 244
  • [50] Large Language Models: Their Success and Impact
    Makridakis, Spyros
    Petropoulos, Fotios
    Kang, Yanfei
    FORECASTING, 2023, 5 (03): : 536 - 549