Exploring large language models for human mobility prediction under public events

Cited by: 6
Authors
Liang, Yuebing [1 ,2 ]
Liu, Yichao [3 ]
Wang, Xiaohan [1 ]
Zhao, Zhan [1 ,4 ,5 ]
Affiliations
[1] Univ Hong Kong, Dept Urban Planning & Design, Hong Kong, Peoples R China
[2] MIT, Senseable City Lab, Cambridge, MA 02139 USA
[3] Tsinghua Univ, Sch Architecture, Beijing, Peoples R China
[4] Univ Hong Kong, Urban Syst Inst, Hong Kong, Peoples R China
[5] Univ Hong Kong, Musketeers Fdn Inst Data Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Public events; Large language models; Human mobility prediction; Travel demand modeling; Text data mining; SUBWAY PASSENGER FLOW;
DOI
10.1016/j.compenvurbsys.2024.102153
Chinese Library Classification (CLC)
TP39 [Computer applications];
Subject classification codes
081203; 0835
Abstract
Public events, such as concerts and sports games, can be major attractors for large crowds, leading to irregular surges in travel demand. Accurate human mobility prediction for public events is thus crucial for event planning as well as traffic or crowd management. While rich textual descriptions about public events are commonly available from online sources, it is challenging to encode such information in statistical or machine learning models. Existing methods are generally limited in incorporating textual information, handling data sparsity, or providing rationales for their predictions. To address these challenges, we introduce a framework for human mobility prediction under public events (LLM-MPE) based on Large Language Models (LLMs), leveraging their unprecedented ability to process textual data, learn from minimal examples, and generate human-readable explanations. Specifically, LLM-MPE first transforms raw, unstructured event descriptions from online sources into a standardized format, and then segments historical mobility data into regular and event-related components. A prompting strategy is designed to direct LLMs in making and rationalizing demand predictions considering historical mobility and event features. A case study is conducted for Barclays Center in New York City, based on publicly available event information and taxi trip data. Results show that LLM-MPE surpasses traditional models, particularly on event days, with textual data significantly enhancing its accuracy. Furthermore, LLM-MPE offers interpretable insights into its predictions. Despite the great potential of LLMs, we also identify key challenges including misinformation and high costs that remain barriers to their broader adoption in large-scale human mobility analysis.
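The abstract describes a three-step pipeline: structuring raw event descriptions, decomposing historical demand into regular and event-related components, and prompting an LLM to predict and explain event-day demand. The sketch below is a minimal, hypothetical illustration of that idea only; the function names (decompose_demand, build_prompt, call_llm), the weekday-hour average used as the "regular" component, and the prompt wording are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch of the LLM-MPE idea described in the abstract.
# All names and the decomposition rule are assumptions, not the paper's implementation.
from statistics import mean
from collections import defaultdict

def decompose_demand(history):
    """Split observed taxi demand into a regular (recurrent) component,
    estimated here as the average for the same weekday and hour, and an
    event-related residual (assumed decomposition for illustration)."""
    by_slot = defaultdict(list)
    for rec in history:  # rec: dict with keys "weekday", "hour", "demand"
        by_slot[(rec["weekday"], rec["hour"])].append(rec["demand"])
    baseline = {slot: mean(values) for slot, values in by_slot.items()}
    residuals = [
        {**rec, "residual": rec["demand"] - baseline[(rec["weekday"], rec["hour"])]}
        for rec in history
    ]
    return baseline, residuals

def build_prompt(event_desc, baseline, past_event_residuals, target_slot):
    """Assemble a natural-language prompt asking the LLM to predict demand
    for the target slot and to explain its reasoning."""
    regular = baseline.get((target_slot["weekday"], target_slot["hour"]), "unknown")
    return (
        "You are a travel-demand analyst for the area around Barclays Center.\n"
        f"Upcoming event (structured from online listings): {event_desc}\n"
        f"Regular demand for this weekday and hour: {regular} trips.\n"
        f"Event-related surges at similar past events: {past_event_residuals}\n"
        "Predict the taxi pick-up demand for this slot and explain your reasoning."
    )

# prediction = call_llm(build_prompt(...))  # call_llm is a placeholder for any LLM API
```

In this sketch the LLM call itself is left as a placeholder, since the abstract does not specify a particular model or API; the point is only to show how historical baselines, event-related residuals, and textual event features could be combined into a single prompt.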
Pages: 16
Related papers
50 records in total
  • [31] Exploring the Role of Large Language Models in Melanoma: A Systematic Review
    Zarfati, Mor
    Nadkarni, Girish N.
    Glicksberg, Benjamin S.
    Harats, Moti
    Greenberger, Shoshana
    Klang, Eyal
    Soffer, Shelly
    JOURNAL OF CLINICAL MEDICINE, 2024, 13 (23)
  • [32] Exploring the Transferability of Visual Prompting for Multimodal Large Language Models
    Zhang, Yichi
    Dong, Yinpeng
    Zhang, Siyuan
    Min, Tianzan
    Su, Hang
    Zhu, Jun
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 26552 - 26562
  • [33] Exploring the role of large language models in radiation emergency response
    Chandra, Anirudh
    Chakraborty, Abinash
    JOURNAL OF RADIOLOGICAL PROTECTION, 2024, 44 (01)
  • [34] Exploring Automated Assertion Generation via Large Language Models
    Zhang, Quanjun
    Sun, Weifeng
    Fang, Chunrong
    Yu, Bowen
    Li, Hongyan
    Yan, Meng
    Zhou, Jianyi
    Chen, Zhenyu
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (03)
  • [35] Exploring Large Language Models as Formative Feedback Tools in Physics
    El-Adawy, Shams
    MacDonagh, Aidan
    Abdelhafez, Mohamed
    2024 PHYSICS EDUCATION RESEARCH CONFERENCE, PERC, 2024, : 126 - 131
  • [36] Exploring Distributional Shifts in Large Language Models for Code Analysis
    Arakelyan, Shushan
    Das, Rocktim Jyoti
    Mao, Yi
    Ren, Xiang
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 16298 - 16314
  • [37] Exploring Reversal Mathematical Reasoning Ability for Large Language Models
    Guo, Pei
    You, Wangjie
    Li, Juntao
    Yan, Bowen
    Zhang, Min
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 13671 - 13685
  • [38] Exploring the applicability of large language models to citation context analysis
    Nishikawa, Kai
    Koshiba, Hitoshi
    SCIENTOMETRICS, 2024, 129 (11) : 6751 - 6777
  • [39] Exploring Spatial Schema Intuitions in Large Language and Vision Models
    Wicke, Philipp
    Wachowiak, Lennart
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 6102 - 6117
  • [40] Exploring Large Language Models to generate Easy to Read content
    Martinez, Paloma
    Ramos, Alberto
    Moreno, Lourdes
    FRONTIERS IN COMPUTER SCIENCE, 2024, 6