Fusion of satellite and street view data for urban traffic accident hotspot identification

Cited by: 0
|
Authors
Guo, Wentong [1 ,2 ,3 ]
Xu, Cheng [4 ]
Jin, Sheng [3 ,5 ,6 ,7 ]
Affiliations
[1] Zhejiang Univ, Polytech Inst, Hangzhou 310058, Peoples R China
[2] Zhejiang Univ, Inst Intelligent Transportat Syst, Hangzhou 310058, Peoples R China
[3] Zhejiang Prov Engn Res Ctr Intelligent Transportat, Hangzhou 310058, Peoples R China
[4] Zhejiang Police Coll, Dept Traff Management Engn, Hangzhou 310053, Peoples R China
[5] Zhejiang Univ, Inst Intelligent Transportat Syst, Coll Civil Engn & Architecture, Hangzhou 310058, Peoples R China
[6] Zhejiang Univ, Zhongyuan Inst, Zhengzhou 450000, Peoples R China
[7] Zhejiang Univ, Anzhong Bldg,Zijingang Campus, Hangzhou 310058, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Accident hotspot; Traffic safety; Remote sensing; Street view; Deep learning; Multimodal fusion; MODEL;
DOI
10.1016/j.jag.2024.103853
Chinese Library Classification
TP7 [Remote Sensing Technology];
Discipline Codes
081102; 0816; 081602; 083002; 1404;
Abstract
As the number of vehicles and the volume of traffic swell in urban centers, cities have experienced a concomitant increase in traffic accidents. Proactively identifying accident-prone hotspots in urban environments holds the promise of preventing traffic mishaps, thereby curtailing the incidence of accidents and reducing property damage. This research introduces the Two-Branch Contextual Feature-Guided Converged Network (TCFGC-Net), which utilizes multimodal satellite and street view data. Designed to extract global structural features from satellite imagery and dynamic continuous features from street view imagery, the model aims to improve the accuracy of detecting urban accident hotspots. For the satellite imagery branch, we propose the Contextual Feature Coupled Convolutional Neural Network (Trans-CFCCNN), designed to extract global spatial features and discern feature correlations across adjacent regions. For the street view imagery branch, we develop the Sequential Feature Recurrent Attention Network (SFRAN) to assimilate and integrate dynamic scene features captured from successive street view images. We designed the Multi-Branch Feature Adaptive Fusion Structure (MBFAF) to aggregate the different branch features for accurate identification of accident hotspots. Experimental results show that the model performs well, with an overall accuracy of 93.7%. Ablation studies confirm that, relative to standalone street view and satellite branch analyses, multimodal fusion enhances the model's accuracy by 12.05% and 17.86%, respectively. The proposed fusion structure yields a 4.22% increase in model accuracy, outpacing conventional feature concatenation techniques. Furthermore, the model outperforms existing deep learning models in terms of overall efficacy. Additionally, to showcase the efficacy of the proposed model structure, we utilize Class Activation Maps (CAM) to provide visual interpretability for the model. These results suggest that the dual-branch fusion model effectively decreases false alarms and directs the model's focus toward regions more pertinent to accident hotspots. Finally, the code and model used for identifying urban traffic accident hotspots in this study are available at: https://github.com/gwt-ZJU/TCFGC-Net.
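The abstract describes a dual-branch design: a satellite branch (Trans-CFCCNN) for global spatial context, a street-view branch (SFRAN) that aggregates features over successive images, and an adaptive fusion structure (MBFAF) that weights the two branches before classification. As a rough illustration only, the PyTorch sketch below mirrors that overall structure using assumed stand-ins (ResNet-18 backbones, a GRU over the street-view sequence, and a learned softmax gate for fusion); it is not the authors' released TCFGC-Net code, which is available at the GitHub link above.

```python
# Minimal sketch of a two-branch satellite / street-view fusion classifier.
# NOT the authors' TCFGC-Net implementation: backbones, feature sizes, and the
# gating-based fusion below are illustrative assumptions that only mirror the
# structure described in the abstract.
import torch
import torch.nn as nn
from torchvision import models


class TwoBranchFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2, feat_dim: int = 256):
        super().__init__()
        # Satellite branch: a standard CNN stands in for Trans-CFCCNN.
        sat = models.resnet18(weights=None)
        sat.fc = nn.Linear(sat.fc.in_features, feat_dim)
        self.sat_branch = sat
        # Street-view branch: per-image CNN features + a GRU over the image
        # sequence stand in for SFRAN's sequential feature aggregation.
        sv = models.resnet18(weights=None)
        sv.fc = nn.Linear(sv.fc.in_features, feat_dim)
        self.sv_cnn = sv
        self.sv_rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Adaptive fusion: learn per-sample weights for the two branches
        # (a simple stand-in for the MBFAF structure).
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, sat_img: torch.Tensor, sv_seq: torch.Tensor) -> torch.Tensor:
        # sat_img: (B, 3, H, W); sv_seq: (B, T, 3, H, W)
        f_sat = self.sat_branch(sat_img)                        # (B, D)
        b, t = sv_seq.shape[:2]
        f_sv = self.sv_cnn(sv_seq.flatten(0, 1)).view(b, t, -1)  # (B, T, D)
        _, h = self.sv_rnn(f_sv)                                # h: (1, B, D)
        f_sv = h.squeeze(0)                                     # (B, D)
        w = self.gate(torch.cat([f_sat, f_sv], dim=-1))         # (B, 2)
        fused = w[:, :1] * f_sat + w[:, 1:] * f_sv              # weighted sum
        return self.classifier(fused)


if __name__ == "__main__":
    model = TwoBranchFusionNet()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```

In this sketch the gate plays the role of an adaptive fusion weight, letting each sample lean more on the satellite or the street-view evidence instead of plain feature concatenation; the paper's reported gain over concatenation motivates that choice, but the exact MBFAF mechanism should be taken from the released repository.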
Pages: 15
Related Papers
50 records
  • [21] Greener the safer? Effects of urban green space on community safety and perception of safety using satellite and street view imagery data
    He, Qian
    Wu, Ling
    Lee, Claire Seungeun
    Zhu, Chunwu
    Bai, Weishan
    Guo, Weichen
    Ye, Xinyue
    JOURNAL OF CRIMINAL JUSTICE, 2025, 97
  • [22] Analyzing the Influence of Urban Street Greening and Street Buildings on Summertime Air Pollution Based on Street View Image Data
    Wu, Dong
    Gong, Jianhua
    Liang, Jianming
    Sun, Jin
    Zhang, Guoyong
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2020, 9 (09)
  • [23] Intelligent deep fusion network for urban traffic flow anomaly identification
    Djenouri, Youcef
    Belhadi, Asma
    Chen, Hsing-Chung
    Lin, Jerry Chun-Wei
    COMPUTER COMMUNICATIONS, 2022, 189 : 175 - 181
  • [24] Measuring Greenspace in Rural Areas for Studies of Birth Outcomes: A Comparison of Street View Data and Satellite Data
    Shi, Xun
    Zhang, Fan
    Chipman, Jonathan W.
    Li, Meifang
    Khatchikian, Camilo
    Karagas, Margaret R.
    GEOHEALTH, 2024, 8 (04)
  • [25] Multi-modal fusion of satellite and street-view images for urban village classification based on a dual-branch deep neural network
    Chen, Boan
    Feng, Quanlong
    Niu, Bowen
    Yan, Fengqin
    Gao, Bingbo
    Yang, Jianyu
    Gong, Jianhua
    Liu, Jiantao
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2022, 109
  • [26] Identification analysis model of traffic accident-prone locations based on geographical view angle
    Yuan, Quan
    Li, Yi-Bing
    Lu, Guang-Quan
    Jiaotong Yunshu Gongcheng Xuebao/Journal of Traffic and Transportation Engineering, 2010, 10 (01): 101 - 105
  • [27] Test Scenario Fusion: How to Fuse Scenarios From Accident and Traffic Observation Data
    Baeumler, Maximilian
    Prokop, Guenther
    IEEE ACCESS, 2024, 12 : 16354 - 16374
  • [28] Efficient Volumetric Fusion of Airborne and Street-Side Data for Urban Reconstruction
    Bodis-Szomoru, Andras
    Riemenschneider, Hayko
    Van Gool, Luc
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016: 3204 - 3209
  • [29] Extending labeled mobile network traffic data by three levels traffic identification fusion
    Liu, Zhen
    Wang, Ruoyu
    Tang, Deyu
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2018, 88 : 453 - 466
  • [30] Urban planning using data fusion of satellite and aerial photo images
    Cheng, P
    Toutin, T
    IGARSS '97 - 1997 INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, PROCEEDINGS VOLS I-IV: REMOTE SENSING - A SCIENTIFIC VISION FOR SUSTAINABLE DEVELOPMENT, 1997: 839 - 841