A vision transformer for lightning intensity estimation using 3D weather radar

Cited by: 5
Authors
Lu, Mingyue [1 ,5 ]
Wang, Menglong [1 ,5 ]
Zhang, Qian [2 ]
Yu, Manzhu [3 ]
He, Caifen [4 ]
Zhang, Yadong [1 ,5 ]
Li, Yuchen [1 ,5 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Collaborat Innovat Ctr Forecast & Evaluat Meteorol, Nanjing 210044, Peoples R China
[2] Xian Univ Finance & Econ, Sch Management Engn, Xian 710100, Peoples R China
[3] Penn State Univ, Dept Geog, University Pk, PA 16802 USA
[4] Ningbo Zhenhai Dist Meteorol Bur, Ningbo 315012, Peoples R China
[5] Nanjing Univ Informat Sci & Technol, Geog Sci Coll, Nanjing 210044, Peoples R China
Keywords
Lightning intensity estimation; 3D weather radar; Vision transformer; SMOTE; Multicategory classification; Tropical cyclone intensity
DOI
10.1016/j.scitotenv.2022.158496
Chinese Library Classification
X [Environmental Science, Safety Science]
Subject Classification Codes
08; 0830
Abstract
Lightning is highly destructive; its blast wave, high temperature, and high voltage pose a serious threat to production, daily life, and personal safety, and high-intensity lightning is far more destructive than low-intensity lightning. Estimating lightning intensity therefore provides an important reference for determining lightning protection levels and assessing lightning disaster risk. Lightning is a small-scale severe convective weather phenomenon, and weather radar is one of the best monitoring systems for frequently sampling the detailed three-dimensional (3D) structures of convective storms, which have small spatial scales and short lifetimes, at high temporal and spatial resolution. It is therefore possible to extract 3D spatial features strongly correlated with lightning from 3D weather radar data and use them to estimate lightning intensity. This paper proposes a Vision Transformer model that automatically estimates lightning intensity from 3D weather radar data. In the experiment, lightning intensity estimation was recast as a multicategory classification task. A framework was designed to produce lightning feature samples for model input from 3D weather radar and lightning location data. The Synthetic Minority Over-Sampling Technique (SMOTE) was then applied to balance and optimize the sample distribution. Finally, the samples were fed into the proposed Vision Transformer-based lightning intensity estimation model for training and evaluation. Experimental results show that the proposed model performs well in lightning intensity estimation.
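The abstract outlines a three-step pipeline: derive per-sample feature volumes from 3D radar and lightning location data, balance the intensity categories with SMOTE, and train a Vision Transformer classifier. The sketch below is not the authors' code; it only illustrates that pipeline under assumed settings (the radar volume shape, patch size, number of intensity classes, and the RadarViT layout are all illustrative), using imbalanced-learn's SMOTE and PyTorch's TransformerEncoder.

```python
# A minimal sketch (not the authors' released code) of the pipeline the abstract describes:
# radar-derived feature samples -> SMOTE class balancing -> Vision-Transformer classifier
# over lightning intensity categories. The SMOTE and TransformerEncoder calls are standard
# library APIs; the volume shape, patch size, class count, and RadarViT layout are assumptions.
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE

LEVELS, H, W = 10, 32, 32   # assumed radar volume: vertical levels x height x width
NUM_CLASSES  = 4            # assumed number of lightning intensity categories
PATCH        = 8            # spatial patch size used for tokenisation

class RadarViT(nn.Module):
    """ViT-style classifier: each spatial patch (with its full vertical column) is one token."""
    def __init__(self, levels=LEVELS, h=H, w=W, patch=PATCH,
                 dim=128, depth=4, heads=4, num_classes=NUM_CLASSES):
        super().__init__()
        self.patch = patch
        n_tokens = (h // patch) * (w // patch)
        self.proj = nn.Linear(levels * patch * patch, dim)          # patch embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))             # class token
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                  # x: (B, levels, H, W)
        B, L, _, _ = x.shape
        p = self.patch
        # Cut non-overlapping p x p spatial patches, keeping every vertical level in each token.
        x = x.unfold(2, p, p).unfold(3, p, p)              # (B, L, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, L * p * p)
        tok = self.proj(x)
        tok = torch.cat([self.cls.expand(B, -1, -1), tok], dim=1) + self.pos
        tok = self.encoder(tok)
        return self.head(tok[:, 0])                        # classify from the class token

# Toy stand-in for the radar/lightning feature samples (flattened volumes, imbalanced labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, LEVELS * H * W)).astype(np.float32)
y = np.repeat(np.arange(NUM_CLASSES), [210, 45, 30, 15])   # heavily skewed intensity classes

# SMOTE works on 2D feature vectors; balance the classes, then reshape back into volumes.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
X_bal = torch.as_tensor(X_bal, dtype=torch.float32).reshape(-1, LEVELS, H, W)
y_bal = torch.as_tensor(y_bal, dtype=torch.long)

model = RadarViT()
logits = model(X_bal[:8])                                   # one forward pass on a mini-batch
print(nn.CrossEntropyLoss()(logits, y_bal[:8]).item())
```

Flattening each patch across all vertical levels is just one plausible way to turn a 3D reflectivity volume into tokens; the paper's actual sample construction, intensity category thresholds, and training setup are not reproduced here.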
Pages: 10