Image inpainting for periodic discrete density defects via frequency analysis and an adaptive transformer-GAN network

Cited by: 1
Authors
Ding, Hui [1 ,2 ]
Huang, Yuhan [1 ]
Chen, Nianzhe [1 ]
Lu, Jiacheng [1 ]
Li, Shaochun [3 ,4 ,5 ]
Affiliations
[1] Capital Normal Univ, Coll Informat Engn, Beijing, Peoples R China
[2] Beijing Adv Innovat Ctr Imaging Technol, Beijing, Peoples R China
[3] Nanjing Univ, Sch Phys, Natl Lab Solid State Microstruct, Nanjing, Peoples R China
[4] Nanjing Univ, Collaborat Innovat Ctr Adv Microstruct, Nanjing, Peoples R China
[5] Nanjing Univ, Jiangsu Prov Key Lab Nanotechnol, Nanjing, Peoples R China
Keywords
Image inpainting; Adaptive window attention; Frequency domain information prior; Periodic discrete density defect
DOI
10.1016/j.asoc.2024.112410
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Image inpainting based on deep learning has made significant progress in addressing regular defects and coherent irregular defects. However, periodic discrete density (PDD) defects, which are prevalent in microscopic images obtained by advanced instruments such as transmission electron microscopes (TEM) and scanning tunneling microscopes (STM), have received little attention. PDD defects typically introduce low-frequency noise into the fast Fourier transform (FFT) of an image, preventing the extraction of useful information, particularly in the low-frequency regions. Despite this significant impact, no method has been reported to date that efficiently removes PDD-induced noise from the FFT of high-resolution microscopic images. In this study, we introduce FGTNet, a two-stage coarse-to-fine inpainting framework built upon generative adversarial networks (GAN) and transformer blocks. By integrating information from both the frequency and spatial domains, our method preserves contextual structures and generates high-frequency details. We also propose an adaptive-window transformer block (A-LeWin) to enhance spatial feature representation and to fully exploit the information surrounding the defects. To validate our approach, we constructed a specialized microscopic image dataset with 2730 training samples and 105 testing samples. For comparison, we also extended the experiments to the public Describable Textures Dataset (DTD) and to the coherent defects commonly discussed in the image inpainting literature. The experimental results indicate that our method performs well on six pixel-level and perceptual-level metrics and achieves the best performance and visual quality for coherent textures.
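The abstract's central observation is that PDD defects concentrate noise in the low-frequency band of an image's FFT. The paper's actual frequency-domain prior is not spelled out here, so the following is only a minimal illustrative sketch, under the assumption that the prior amounts to suppressing a small centred low-frequency region of the shifted spectrum; the function name `frequency_prior` and the `radius` hyperparameter are hypothetical.

```python
# Hypothetical sketch (not the authors' code): attenuate the low-frequency
# noise that PDD defects introduce in the FFT of a microscopic image, and
# return a spatial-domain image usable as a frequency-domain prior.
import numpy as np


def frequency_prior(image: np.ndarray, radius: int = 8) -> np.ndarray:
    """Suppress a centred low-frequency band of `image`'s spectrum.

    `radius` (an assumed hyperparameter) controls how much of the
    low-frequency region around the spectrum centre is removed.
    """
    # 2-D FFT with the zero-frequency component shifted to the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Circular mask that zeroes the central (low-frequency) band, where
    # PDD-induced noise concentrates according to the abstract.
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist > radius

    # Invert the filtered spectrum back to the spatial domain.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.abs(filtered)


# Example usage on a grayscale TEM/STM image stored as a float array:
# prior = frequency_prior(tem_image, radius=8)
```

In the paper itself this frequency-domain information is combined with spatial features inside the GAN/transformer network rather than applied as a standalone filter; the sketch only illustrates where the low-frequency noise lives and how it could be isolated.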
Pages: 10