Deciphering Human Mobility: Inferring Semantics of Trajectories with Large Language Models

Cited by: 0
Authors
Luo, Yuxiao [1 ]
Cao, Zhongcai [1 ]
Jin, Xin [1 ]
Liu, Kang [1 ]
Yin, Ling [1 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Human mobility analysis; Large language models; Trajectory semantic inference; TRAVEL; PATTERNS;
DOI
10.1109/MDM61037.2024.00060
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Understanding human mobility patterns is essential for applications ranging from urban planning to public safety. Individual trajectory data, such as mobile phone location records, are rich in spatio-temporal information but often lack semantic detail, which limits their utility for in-depth mobility analysis. Existing methods can infer basic routine activity sequences from such data, but they lack depth in understanding complex human behaviors and user characteristics, and they depend on hard-to-obtain auxiliary datasets such as travel surveys. To address these limitations, this paper defines trajectory semantic inference along three key dimensions: user occupation category, activity sequence, and trajectory description, and proposes the Trajectory Semantic Inference with Large Language Models (TSI-LLM) framework to leverage LLMs for comprehensive and deep inference of trajectory semantics. We adopt spatio-temporal-attribute-enhanced data formatting (STFormat) and design a context-inclusive prompt, enabling LLMs to interpret and infer the semantics of trajectory data more effectively. Experimental validation on real-world trajectory datasets demonstrates the efficacy of TSI-LLM in deciphering complex human mobility patterns. This study explores the potential of LLMs for enhancing the semantic analysis of trajectory data, paving the way for more sophisticated and accessible human mobility research.
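For illustration only, the sketch below shows how one might assemble a context-inclusive prompt over spatio-temporally enriched stay-point records and hand it to an LLM to obtain the three semantic dimensions named in the abstract. This is not the authors' TSI-LLM or STFormat implementation: the record layout, field names, and the ask_llm callable are assumptions made for the example.

```python
# Illustrative sketch only: prompt assembly for trajectory semantic inference.
# The StayPoint layout and the ask_llm callable are assumptions, not the
# paper's STFormat or TSI-LLM code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class StayPoint:
    start: str          # e.g. "08:50"
    end: str            # e.g. "18:05"
    place: str          # coarse place label, e.g. "office park"
    poi_context: str    # nearby POI summary, e.g. "tech companies, cafes"


def format_trajectory(points: List[StayPoint]) -> str:
    """Render stay points as spatio-temporally enriched lines (assumed STFormat-style)."""
    return "\n".join(
        f"- {p.start} to {p.end}: stayed at {p.place} (nearby: {p.poi_context})"
        for p in points
    )


def build_prompt(points: List[StayPoint]) -> str:
    """Context-inclusive prompt asking for the three dimensions from the abstract."""
    return (
        "You are an analyst of human mobility.\n"
        "Given one anonymized user's daily trajectory, infer:\n"
        "1. the user's likely occupation category,\n"
        "2. the activity sequence (e.g. home -> work -> dining -> home),\n"
        "3. a short natural-language description of the trajectory.\n\n"
        "Trajectory:\n" + format_trajectory(points)
    )


def infer_semantics(points: List[StayPoint], ask_llm: Callable[[str], str]) -> str:
    """ask_llm is any LLM completion callable supplied by the caller (hypothetical)."""
    return ask_llm(build_prompt(points))


if __name__ == "__main__":
    day = [
        StayPoint("08:50", "18:05", "office park", "tech companies, cafes"),
        StayPoint("18:40", "19:30", "shopping mall", "restaurants, cinema"),
        StayPoint("20:00", "07:30", "residential area", "apartments, convenience store"),
    ]
    # Inspect the prompt; plug in a real LLM client via infer_semantics to get answers.
    print(build_prompt(day))
```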
Pages: 289-294
Number of pages: 6
Related Papers
50 items in total
  • [21] Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language
    Hosseini, Eghbal A.
    Fedorenko, Evelina
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [22] Two Discourse Driven Language Models for Semantics
    Peng, Haoruo
    Roth, Dan
    PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2016, : 290 - 300
  • [23] Plans and Semantics in Human Processing of Language
    Hamburger, H.
    Crain, S.
    COGNITIVE SCIENCE, 1987, 11 (01) : 101 - 136
  • [24] Language Intent Models for Inferring User Browsing Behavior
    Tsagkias, Manos
    Blanco, Roi
    SIGIR 2012: PROCEEDINGS OF THE 35TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2012, : 335 - 344
  • [25] Inferring the drivers of language change using spatial models
    Burridge, James
    Blaxter, Tamsin
    JOURNAL OF PHYSICS-COMPLEXITY, 2021, 2 (03):
  • [26] Inferring Nighttime Satellite Imagery from Human Mobility
    Dickinson, Brian
    Ghoshal, Gourab
    Dotiwalla, Xerxes
    Sadilek, Adam
    Kautz, Henry
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 394 - 402
  • [27] Large Language Models are Not Models of Natural Language: They are Corpus Models
    Veres, Csaba
    IEEE ACCESS, 2022, 10 : 61970 - 61979
  • [28] Large Language Models as Zero-Shot Human Models for Human-Robot Interaction
    Zhang, Bowen
    Soh, Harold
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 7961 - 7968
  • [29] Large Language Models
    Vargas, Diego Collarana
    Katsamanis, Nassos
    ERCIM NEWS, 2024, (136): : 12 - 13
  • [30] Large Language Models
    Cerf, Vinton G.
    COMMUNICATIONS OF THE ACM, 2023, 66 (08) : 7 - 7