MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling

Cited by: 0
Authors
Zhang, Rui [1 ]
Shen, Lei [1 ]
Affiliation
[1] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310018, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Transformers; Magnetic hysteresis; Magnetic cores; Training; Magnetic flux; Core loss; Complexity theory; Magnetic materials; Vectors; Saturation magnetization; deep learning; hysteresis loop; power magnetics; vision Transformer (ViT);
DOI
10.1109/TMAG.2025.3527486
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
The neural network-based method for modeling magnetic materials enables the estimation of the hysteresis B-H loop and core loss across a wide operating range. Transformers are neural networks widely used in sequence-to-sequence tasks. The classical Transformer modeling method suffers from high per-layer complexity and long recurrent inference time when dealing with long sequences. While down-sampling methods can mitigate these issues, they often sacrifice modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling; it shortens waveform sequences with minimal loss of information. We trained the network on the open-source magnetic core loss dataset MagNet. Experimental results demonstrate that MAG-Vision performs well in estimating the hysteresis B-H loop and magnetic core losses, with an average relative core-loss error below 2% for most materials. Further experiments compare MAG-Vision with different network structures to validate its advantages in accuracy, training speed, and inference time.
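To make the sequence-shortening idea concrete, the sketch below shows one plausible way a long 1-D flux-density waveform could be split into patches and fed to a ViT-style encoder, following the abstract's description. It is a minimal illustration, not the authors' implementation: the class name WaveformViT, the patch length, the layer sizes, the mean-pooling step, and the scalar core-loss head are all assumptions made for the example.

    # Minimal PyTorch sketch (assumed hyperparameters, not the paper's settings):
    # a long B-waveform is cut into patches so the Transformer sees far fewer tokens.
    import torch
    import torch.nn as nn

    class WaveformViT(nn.Module):  # hypothetical name for illustration
        def __init__(self, seq_len=1024, patch_len=16, d_model=128,
                     n_heads=4, n_layers=4):
            super().__init__()
            assert seq_len % patch_len == 0
            self.patch_len = patch_len
            n_patches = seq_len // patch_len          # 1024 samples -> 64 tokens
            # Linear patch embedding: each patch of raw samples becomes one token.
            self.embed = nn.Linear(patch_len, d_model)
            # Learnable positional encoding, one vector per patch.
            self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
            enc_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
            # Example head: a single scalar core-loss estimate per waveform.
            self.head = nn.Linear(d_model, 1)

        def forward(self, b_waveform):                # (batch, seq_len)
            # Slice the waveform into non-overlapping patches.
            x = b_waveform.unfold(1, self.patch_len, self.patch_len)
            x = self.embed(x) + self.pos              # (batch, n_patches, d_model)
            x = self.encoder(x)
            return self.head(x.mean(dim=1)).squeeze(-1)  # (batch,)

    # Usage: a batch of 8 flux-density waveforms, 1024 samples each.
    model = WaveformViT()
    loss_estimate = model(torch.randn(8, 1024))
    print(loss_estimate.shape)  # torch.Size([8])

With a patch length of 16, a 1024-point waveform is reduced to a 64-token sequence, illustrating the kind of input-length reduction the abstract attributes to the ViT backbone.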
Pages: 6
Related Papers
50 items in total
  • [41] Vision-based detection of MAG weld pool
    Gao, Jinqiang
    Wu, Chuansong
    Zhang, Min
    Zhao, Yanhua
    China Welding, 2007, (01) : 32 - 35
  • [42] MIMTracking: Masked image modeling enhanced vision transformer for visual object tracking
    Zhang, Shuo
    Zhang, Dan
    Zou, Qi
    NEUROCOMPUTING, 2024, 606
  • [43] Centroid-Centered Modeling for Efficient Vision Transformer Pre-Training
    Yan, Xin
    Li, Zuchao
    Zhang, Lefei
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT IV, 2025, 15034 : 465 - 479
  • [44] ViTO: Vision Transformer-Operator
    Ovadia, Oded
    Kahana, Adar
    Stinis, Panos
    Turkel, Eli
    Givoli, Dan
    Karniadakis, George Em
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2024, 428
  • [45] Vision Transformer in Industrial Visual Inspection
    Hutten, Nils
    Meyes, Richard
    Meisen, Tobias
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [46] Video Summarization With Spatiotemporal Vision Transformer
    Hsu, Tzu-Chun
    Liao, Yi-Sheng
    Huang, Chun-Rong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3013 - 3026
  • [47] Ensemble Vision Transformer for Dementia Diagnosis
    Huang, Fei
    Qiu, Anqi
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5551 - 5561
  • [48] Self-slimmed Vision Transformer
    Zong, Zhuofan
    Li, Kunchang
    Song, Guanglu
    Wang, Yali
    Qiao, Yu
    Leng, Biao
    Liu, Yu
    COMPUTER VISION, ECCV 2022, PT XI, 2022, 13671 : 432 - 448
  • [49] Survey of Transformer Research in Computer Vision
    Li, Xiang
    Zhang, Tao
    Zhang, Zhe
    Wei, Hongyang
    Qian, Yurong
    Computer Engineering and Applications, 2023, 59 (01) : 1 - 14
  • [50] ViTAS: Vision Transformer Architecture Search
    Su, Xiu
    You, Shan
    Xie, Jiyang
    Zheng, Mingkai
    Wang, Fei
    Qian, Chen
    Zhang, Changshui
    Wang, Xiaogang
    Xu, Chang
    COMPUTER VISION, ECCV 2022, PT XXI, 2022, 13681 : 139 - 157