Boosted GAN with Semantically Interpretable Information for Image Inpainting

Cited by: 7
Authors
Li, Ang [1 ]
Qi, Jianzhong [1 ]
Zhang, Rui [1 ]
Kotagiri, Ramamohanarao [1 ]
Affiliations
[1] Univ Melbourne, Melbourne, Vic, Australia
Keywords
image inpainting; GAN; semantic information; image attribute; image segmentation;
DOI
10.1109/ijcnn.2019.8851926
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image inpainting aims at restoring missing regions of corrupted images, which has many applications such as image restoration and object removal. However, current GAN-based inpainting models fail to explicitly consider the semantic consistency between restored images and original images. For example, given a male face image with the region of one eye missing, current models may restore it with a female eye. This is due to the ambiguity of GAN-based inpainting models: these models can generate many possible restorations for a given missing region. To address this limitation, our key insight is that semantically interpretable information (such as attribute and segmentation information) of input images (with missing regions) can provide essential guidance for the inpainting process. Based on this insight, we propose a boosted GAN with semantically interpretable information for image inpainting that consists of an inpainting network and a discriminative network. The inpainting network utilizes two auxiliary pretrained networks to discover the attribute and segmentation information of input images and incorporates them into the inpainting process to provide explicit semantic-level guidance. The discriminative network adopts a multi-level design that enforces regularization not only on overall realness but also on attribute and segmentation consistency with the original images. Experimental results show that our proposed model preserves consistency at both the attribute and segmentation levels, and significantly outperforms the state-of-the-art models.
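The architecture described in the abstract can be sketched as two cooperating networks: a generator that conditions the inpainting on attribute and segmentation inputs, and a discriminator with separate heads for realness, attributes, and segmentation. The sketch below is a minimal, hypothetical PyTorch illustration of this structure (all layer sizes and module names are assumptions, not the authors' code):

```python
# Hypothetical sketch (not the paper's implementation) of a GAN inpainting
# setup guided by semantic information, as described in the abstract.
import torch
import torch.nn as nn

class InpaintingGenerator(nn.Module):
    """Restores a masked image, conditioned on attribute and segmentation maps."""
    def __init__(self, n_attrs=10, n_seg_classes=5):
        super().__init__()
        # Input channels: 3 (masked RGB) + 1 (binary mask)
        # + n_attrs (attributes broadcast to spatial maps)
        # + n_seg_classes (segmentation score maps).
        in_ch = 3 + 1 + n_attrs + n_seg_classes
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_img, mask, attrs, seg):
        b, _, h, w = masked_img.shape
        # Broadcast the attribute vector over the spatial dimensions.
        attr_maps = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
        x = torch.cat([masked_img, mask, attr_maps, seg], dim=1)
        return self.net(x)

class MultiLevelDiscriminator(nn.Module):
    """Predicts patch realness plus attribute and segmentation outputs,
    so consistency can be penalized at each semantic level."""
    def __init__(self, n_attrs=10, n_seg_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.realness = nn.Conv2d(64, 1, 3, padding=1)        # patch realness
        self.attr_head = nn.Linear(64, n_attrs)               # attribute logits
        self.seg_head = nn.Conv2d(64, n_seg_classes, 3, padding=1)

    def forward(self, img):
        f = self.features(img)
        pooled = f.mean(dim=(2, 3))
        return self.realness(f), self.attr_head(pooled), self.seg_head(f)

# One forward pass on dummy data to illustrate the tensor shapes involved.
g = InpaintingGenerator()
d = MultiLevelDiscriminator()
img = torch.randn(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64); mask[:, :, 16:48, 16:48] = 1  # missing hole
attrs = torch.randn(2, 10)          # e.g. outputs of a pretrained attribute net
seg = torch.randn(2, 5, 64, 64)     # e.g. outputs of a pretrained segmentation net
restored = g(img * (1 - mask), mask, attrs, seg)
real_score, attr_logits, seg_logits = d(restored)
print(restored.shape)  # torch.Size([2, 3, 64, 64])
```

In a full training loop, the attribute and segmentation heads would be supervised against the auxiliary networks' predictions on the original (uncorrupted) images, which is what enforces the semantic consistency the paper targets.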
Pages: 8
Related papers
50 records total
  • [21] An Image Inpainting Method Using Information of Damage Region
    Chen, Guoyue
    Zhang, Xingguo
    Nakui, Kazutaka
    Saruta, Kazuki
    Terata, Yuki
    Zhu, Min
    2016 3RD INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2016), 2016, : 128 - 132
  • [22] A Structure-Consistency GAN for Unpaired AS-OCT Image Inpainting
    Bai, Guanhua
    Li, Sanqian
    Zhang, He
    Higashita, Risa
    Liu, Jiang
    Li, Jie
    Zhang, Meng
    OPHTHALMIC MEDICAL IMAGE ANALYSIS, OMIA 2023, 2023, 14096 : 142 - 151
  • [23] Multi-scale semantic image inpainting with residual learning and GAN
    Jiao, Libin
    Wu, Hao
    Wang, Haodi
    Bie, Rongfang
    NEUROCOMPUTING, 2019, 331 : 199 - 212
  • [24] DD-GAN: pedestrian image inpainting with simultaneous tone correction
    Li, Yuelong
    Zhang, Tongshun
    Bi, Junyu
    Wang, Jianming
    Multimedia Tools and Applications, 2023, 82 : 2503 - 2516
  • [25] MI-GAN: A Simple Baseline for Image Inpainting on Mobile Devices
    Sargsyan, Andranik
    Navasardyan, Shant
    Xu, Xingqian
    Shi, Humphrey
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 7301 - 7311
  • [26] Efficient texture-aware multi-GAN for image inpainting
    Hedjazi, Mohamed Abbas
    Genc, Yakup
    KNOWLEDGE-BASED SYSTEMS, 2021, 217
  • [27] GAN-based face identity feature recovery for image inpainting
    Wang, Yan
    Shin, Jitae
    2022 37TH INTERNATIONAL TECHNICAL CONFERENCE ON CIRCUITS/SYSTEMS, COMPUTERS AND COMMUNICATIONS (ITC-CSCC 2022), 2022, : 930 - 932
  • [28] Face inpainting based on GAN by facial prediction and fusion as guidance information
    Zhang, Xian
    Shi, Canghong
    Wang, Xin
    Wu, Xi
    Li, Xiaojie
    Lv, Jiancheng
    Mumtaz, Imran
    APPLIED SOFT COMPUTING, 2021, 111 (111)
  • [29] From Augmentation to Inpainting: Improving Visual SLAM With Signal Enhancement Techniques and GAN-Based Image Inpainting
    Theodorou, Charalambos
    Velisavljevic, Vladan
    Dyo, Vladimir
    Nonyelu, Fredi
    IEEE ACCESS, 2024, 12 : 38525 - 38541
  • [30] Image inpainting based on fusion structure information and pixelwise attention
    Wu, Dan
    Cheng, Jixiang
    Li, Zhidan
    Chen, Zhou
    VISUAL COMPUTER, 2024, 40 (12): : 8573 - 8589