MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling

Times Cited: 0
Authors
Zhang, Rui [1 ]
Shen, Lei [1 ]
Affiliations
[1] Hangzhou Dianzi University, School of Automation, Hangzhou 310018, Zhejiang, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Transformers; Magnetic hysteresis; Magnetic cores; Training; Magnetic flux; Core loss; Complexity theory; Magnetic materials; Vectors; Saturation magnetization; deep learning; hysteresis loop; power magnetics; vision Transformer (ViT);
DOI
10.1109/TMAG.2025.3527486
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Neural network-based methods for modeling magnetic materials enable the estimation of the hysteresis B-H loop and core loss across a wide operating range. Transformers are neural networks widely used in sequence-to-sequence tasks, but the classical Transformer modeling approach suffers from high per-layer complexity and long recurrent inference times on long sequences. Down-sampling methods can mitigate these issues, but they often sacrifice modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling and shortens waveform sequences with minimal loss of information. We trained the network on the open-source magnetic core-loss dataset MagNet. Experimental results demonstrate that MAG-Vision performs well in estimating hysteresis B-H loops and magnetic core losses: the average relative error of the core-loss estimate is below 2% for most materials. Further experiments compare MAG-Vision with different network structures to validate its advantages in accuracy, training speed, and inference time.
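As an illustration of the patching idea the abstract describes, the sketch below (a minimal PyTorch assumption, not the authors' released implementation) splits a long sampled flux-density waveform into fixed-length patches, embeds each patch as one token, and runs a small Transformer encoder over the shortened token sequence to regress a scalar core-loss value. The class name WaveformViT, the 64-sample patch length, and the mean-pooled regression head are hypothetical choices for the example.

import torch
import torch.nn as nn

class WaveformViT(nn.Module):
    # Hypothetical sketch: ViT-style patching applied to a 1-D B(t) waveform.
    def __init__(self, patch_len=64, d_model=128, n_heads=4, n_layers=4,
                 max_patches=256):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)            # one patch -> one token
        self.pos = nn.Parameter(torch.zeros(1, max_patches, d_model))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                     # scalar core-loss head

    def forward(self, b_wave):                                # b_wave: (batch, seq_len)
        batch, seq_len = b_wave.shape
        # Fold the long waveform into patches: the attention layers now see
        # seq_len / patch_len tokens instead of seq_len samples.
        patches = b_wave.view(batch, seq_len // self.patch_len, self.patch_len)
        tokens = self.embed(patches) + self.pos[:, :patches.shape[1]]
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1)).squeeze(-1)

model = WaveformViT()
pred = model(torch.randn(8, 1024))   # 8 waveforms, 1024 samples each -> 16 tokens each
print(pred.shape)                    # torch.Size([8])

With 1024-sample waveforms and 64-sample patches, self-attention operates over 16 tokens rather than 1024 samples, shrinking the quadratic attention cost by a factor of (1024/16)^2 = 4096 in this toy setting, which is the kind of per-layer complexity reduction the abstract attributes to patching.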
Pages: 6