Image rectangling network based on reparameterized transformer and assisted learning

Cited: 0
Authors
Yang, Lichun [1 ]
Tian, Bin [2 ]
Zhang, Tianyin [1 ]
Yong, Jiu [3 ]
Dang, Jianwu [2 ]
Affiliations
[1] Lanzhou Jiaotong Univ, Key Lab Optoelect Technol & Intelligent Control, Minist Educ, Lanzhou, Gansu, Peoples R China
[2] Lanzhou Jiaotong Univ, Coll Elect & Informat Engn, Lanzhou, Gansu, Peoples R China
[3] Lanzhou Jiaotong Univ, Natl Virtual Simulat Expt Teaching Ctr Railway Tra, Lanzhou 730070, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image rectangling; Single warp; Re-parameterization; Assisted learning;
DOI
10.1038/s41598-024-56589-y
CLC classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Discipline codes
07 ; 0710 ; 09 ;
Abstract
Stitched images offer a broader field of view, but their boundaries are often irregular and visually unpleasant. To address this issue, current image rectangling methods repeatedly warp local grids to obtain rectangular images with regular boundaries; however, repeated warping can distort content and lose boundary information. We have developed an image rectangling solution based on a reparameterized transformer structure that requires only a single warp. Additionally, we have designed an assisted learning network to support the training of the image rectangling network. To improve the network's parallel efficiency, we have introduced a local thin-plate spline transform strategy that achieves efficient local deformation. Ultimately, the proposed method achieves state-of-the-art performance in stitched-image rectangling with a small number of parameters while maintaining high content fidelity. The code is available at https://github.com/MelodYanglc/TransRectangling.
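The thin-plate spline (TPS) transform mentioned in the abstract can be illustrated with a minimal NumPy sketch of classical TPS interpolation. This is our own illustrative code, not the paper's local-TPS strategy or its released implementation; the function name `tps_warp` and its interface are assumptions.

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """Classical thin-plate spline interpolation: maps `query` points
    according to the deformation defined by src_pts -> dst_pts
    control-point correspondences. Illustrative sketch only."""
    def U(r2):
        # TPS radial basis U(r) = r^2 * log(r^2), with U(0) defined as 0
        return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

    n = src_pts.shape[0]
    # Pairwise squared distances between control points
    d2 = np.sum((src_pts[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), src_pts])      # affine part [1, x, y]
    # Assemble the (n+3) x (n+3) TPS system and solve for coefficients
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst_pts, np.zeros((3, 2))])
    coef = np.linalg.solve(L, rhs)                 # (n+3, 2)

    # Evaluate the spline at the query points
    q2 = np.sum((query[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    basis = np.hstack([U(q2), np.ones((query.shape[0], 1)), query])
    return basis @ coef
```

Because the affine terms are solved jointly with the radial-basis weights, a TPS with any non-collinear control points reproduces pure translations and other affine motions exactly; non-affine control-point displacements bend the plane smoothly, which is the property exploited when deforming local grids toward a rectangular boundary.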
Pages: 11
Related papers
50 records in total
  • [1] Image rectangling network based on reparameterized transformer and assisted learning
    Yang, Lichun
    Tian, Bin
    Zhang, Tianyin
    Yong, Jiu
    Dang, Jianwu
    SCIENTIFIC REPORTS, 2024, 14
  • [2] Deep Rectangling for Image Stitching: A Learning Baseline
    Nie, Lang
    Lin, Chunyu
    Liao, Kang
    Liu, Shuaicheng
    Zhao, Yao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 5730 - 5738
  • [3] Learning Contextual Transformer Network for Image Inpainting
    Deng, Ye
    Hui, Siqi
    Zhou, Sanping
    Meng, Deyu
    Wang, Jinjun
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2529 - 2538
  • [4] A Multi-Stage Transformer Network for Image Dehazing Based on Contrastive Learning
    Gao F.
    Ji S.
    Guo J.
    Hou J.
    Ouyang C.
    Yang B.
    Journal of Xi'an Jiaotong University, 2023, 57 (01): 195 - 210
  • [5] Learning A Sparse Transformer Network for Effective Image Deraining
    Chen, Xiang
    Li, Hao
    Li, Mingqiang
    Pan, Jinshan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 5896 - 5905
  • [6] Remote Sensing Image Rectangling With Iterative Warping Kernel Self-Correction Transformer
    Qiu, Linwei
    Xie, Fengying
    Liu, Chang
    Wang, Ke
    Song, Xuedong
    Shi, Zhenwei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [7] WTFusion: Wavelet-Assisted Transformer Network for Multisensor Image Fusion
    Li, Xiaoling
    Li, Yanfeng
    Chen, Houjin
    Sun, Jia
    Wang, Minjun
    Chen, Luyifu
    IEEE SENSORS JOURNAL, 2024, 24 (22) : 37152 - 37168
  • [8] Swin Transformer Assisted Prior Attention Network for Medical Image Segmentation
    Liao, Zhihao
    Fan, Neng
    Xu, Kai
    APPLIED SCIENCES-BASEL, 2022, 12 (09):
  • [9] Learning Texture Transformer Network for Image Super-Resolution
    Yang, Fuzhi
    Yang, Huan
    Fu, Jianlong
    Lu, Hongtao
    Guo, Baining
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 5790 - 5799
  • [10] Fine grained image classification network based on transformer bilinear network
    Xiang X.
    Liu Y.
    Zheng B.
    Tan Y.
    Journal of Huazhong University of Science and Technology (Natural Science Edition), 2024, 52 (02): 84 - 89