Deciphering Human Mobility: Inferring Semantics of Trajectories with Large Language Models

Cited: 0
Authors
Luo, Yuxiao [1 ]
Cao, Zhongcai [1 ]
Jin, Xin [1 ]
Liu, Kang [1 ]
Yin, Ling [1 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Human mobility analysis; Large language models; Trajectory semantic inference; TRAVEL; PATTERNS;
DOI
10.1109/MDM61037.2024.00060
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline Code
0812
Abstract
Understanding human mobility patterns is essential for applications ranging from urban planning to public safety. Individual trajectory data, such as mobile phone location records, are rich in spatio-temporal information but often lack semantic detail, limiting their utility for in-depth mobility analysis. Existing methods can infer basic routine activity sequences from such data, but they fall short in understanding complex human behaviors and user characteristics, and they depend on hard-to-obtain auxiliary datasets such as travel surveys. To address these limitations, this paper defines trajectory semantic inference along three key dimensions: user occupation category, activity sequence, and trajectory description, and proposes the Trajectory Semantic Inference with Large Language Models (TSI-LLM) framework to leverage LLMs to infer trajectory semantics comprehensively and deeply. We adopt spatio-temporal attribute-enhanced data formatting (STFormat) and design a context-inclusive prompt, enabling LLMs to interpret and infer the semantics of trajectory data more effectively. Experimental validation on real-world trajectory datasets demonstrates the efficacy of TSI-LLM in deciphering complex human mobility patterns. This study explores the potential of LLMs to enhance the semantic analysis of trajectory data, paving the way for more sophisticated and accessible human mobility research.
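To make the abstract's pipeline concrete, the sketch below illustrates the general idea of attribute-enhanced data formatting plus a context-inclusive prompt for the three inference dimensions (occupation category, activity sequence, trajectory description). The `Stay` record, the text format, and the prompt wording are all hypothetical assumptions for illustration; the paper's actual STFormat and prompt design are not given in this record.

```python
from dataclasses import dataclass

@dataclass
class Stay:
    """A hypothetical stay record derived from raw location data."""
    place: str    # anonymized place label, e.g. a POI category
    start: str    # "HH:MM" arrival time
    end: str      # "HH:MM" departure time
    weekday: str

def format_stay(s: Stay) -> str:
    # Render one stay with its spatio-temporal attributes as plain text,
    # in the spirit of attribute-enhanced formatting (format is assumed).
    return f"{s.weekday} {s.start}-{s.end}: stayed at {s.place}"

def build_prompt(stays: list[Stay]) -> str:
    # Assemble a context-inclusive prompt covering the paper's three
    # inference dimensions (wording is an illustrative assumption).
    trajectory = "\n".join(format_stay(s) for s in stays)
    return (
        "You are an expert in human mobility analysis.\n"
        "Given the stay sequence below, infer three things:\n"
        "1. the user's likely occupation category,\n"
        "2. the activity sequence (one activity per stay),\n"
        "3. a one-paragraph trajectory description.\n\n"
        f"Trajectory:\n{trajectory}"
    )

prompt = build_prompt([
    Stay("residential area", "00:00", "08:10", "Monday"),
    Stay("office building", "08:40", "18:05", "Monday"),
    Stay("residential area", "18:40", "23:59", "Monday"),
])
print(prompt)
```

The resulting prompt would be sent to an LLM of choice; the key design point the abstract emphasizes is that enriching the stay sequence with spatio-temporal context lets the model reason about semantics without auxiliary travel-survey data.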
Pages: 289-294
Page count: 6
Related Papers
50 in total
  • [31] Deciphering Stereotypes in Pre-Trained Language Models
    Ma, Weicheng
    Scheible, Henry
    Wang, Brian
    Veeramachaneni, Goutham
    Chowdhary, Pratim
    Sung, Alan
    Koulogeorge, Andrew
    Wang, Lili
    Yang, Diyi
    Vosoughi, Soroush
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11328 - 11345
  • [32] Large language models for human-robot interaction: A review
    Zhang, Ceng
    Chen, Junxin
    Li, Jiatong
    Peng, Yanhong
    Mao, Zebing
    BIOMIMETIC INTELLIGENCE AND ROBOTICS, 2023, 3 (04):
  • [33] Conceptual structure coheres in human cognition but not in large language models
    Suresh, Siddharth
    Mukherjee, Kushin
    Yu, Xizheng
    Huang, Wei-Chun
    Padua, Lisa
    Rogers, Timothy T.
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 722 - 738
  • [34] Large language models vs human for classifying clinical documents
    Mustafa, Akram
    Naseem, Usman
    Azghadi, Mostafa Rahimi
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2025, 195
  • [35] Homogenization Effects of Large Language Models on Human Creative Ideation
    Anderson, Barrett R.
    Shah, Jash Hemant
    Kreminski, Max
    PROCEEDINGS OF THE 16TH CONFERENCE ON CREATIVITY AND COGNITION, C&C 2024, 2024, : 413 - 425
  • [36] Strong and weak alignment of large language models with human values
    Khamassi, Mehdi
    Nahon, Marceau
    Chatila, Raja
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [37] Frontiers: Can Large Language Models Capture Human Preferences?
    Goli, Ali
    Singh, Amandeep
    MARKETING SCIENCE, 2024, 43 (04)
  • [38] Studying large language models as compression algorithms for human culture
    Buttrick, Nicholas
    TRENDS IN COGNITIVE SCIENCES, 2024, 28 (03) : 187 - 189
  • [39] Can Large Language Models Capture Dissenting Human Voices?
    Lee, Noah
    An, Na Min
    Thorne, James
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4569 - 4585
  • [40] Training Trajectories of Language Models Across Scales
    Xia, Mengzhou
    Artetxe, Mikel
    Zhou, Chunting
    Lin, Xi Victoria
    Pasunuru, Ramakanth
    Chen, Danqi
    Zettlemoyer, Luke
    Stoyanov, Ves
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 13711 - 13738