Boosted GAN with Semantically Interpretable Information for Image Inpainting

Cited by: 7
Authors
Li, Ang [1 ]
Qi, Jianzhong [1 ]
Zhang, Rui [1 ]
Kotagiri, Ramamohanarao [1 ]
Affiliations
[1] Univ Melbourne, Melbourne, Vic, Australia
Keywords
image inpainting; GAN; semantic information; image attribute; image segmentation;
DOI
10.1109/ijcnn.2019.8851926
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image inpainting aims to restore missing regions of corrupted images and has many applications, such as image restoration and object removal. However, current GAN-based inpainting models fail to explicitly consider the semantic consistency between restored images and the original images. For example, given a male face image with the region of one eye missing, current models may restore it with a female eye. This is due to the ambiguity of GAN-based inpainting models: they can generate many possible restorations for a given missing region. To address this limitation, our key insight is that semantically interpretable information (such as attribute and segmentation information) about input images (with missing regions) can provide essential guidance for the inpainting process. Based on this insight, we propose a boosted GAN with semantically interpretable information for image inpainting, consisting of an inpainting network and a discriminative network. The inpainting network uses two auxiliary pretrained networks to discover the attribute and segmentation information of input images and incorporates them into the inpainting process to provide explicit semantic-level guidance. The discriminative network adopts a multi-level design that enforces regularization not only on overall realness but also on attribute and segmentation consistency with the original images. Experimental results show that our proposed model preserves consistency at both the attribute and segmentation levels and significantly outperforms state-of-the-art models.
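The pipeline described in the abstract can be sketched schematically: the inpainting network receives the masked image concatenated channel-wise with attribute and segmentation maps produced by two auxiliary pretrained networks, and the multi-level discriminator combines an overall-realness term with attribute- and segmentation-consistency terms. The NumPy sketch below is an illustrative assumption, not the authors' implementation; the auxiliary functions and loss weights are hypothetical stand-ins.

```python
import numpy as np

def auxiliary_attributes(img):
    # Hypothetical stand-in for a pretrained attribute network
    # (e.g. gender, glasses); returns a one-channel attribute map.
    return np.full(img.shape[:2] + (1,), img.mean())

def auxiliary_segmentation(img):
    # Hypothetical stand-in for a pretrained segmentation network;
    # returns a one-channel label map.
    return (img.mean(axis=2, keepdims=True) > 0.5).astype(float)

def inpainting_network_input(img, mask):
    """Build the generator input: masked image plus semantic guidance.
    `mask` is 1 where pixels are missing."""
    masked = img * (1.0 - mask)
    attr = auxiliary_attributes(masked)
    seg = auxiliary_segmentation(masked)
    # Channel-wise concatenation gives the inpainting network explicit
    # semantic-level guidance alongside the corrupted pixels.
    return np.concatenate([masked, attr, seg], axis=2)

def multilevel_discriminator_loss(real, fake, w_attr=1.0, w_seg=1.0):
    """Toy multi-level objective: overall realness plus attribute and
    segmentation consistency between restored and original images.
    (Mean-squared terms stand in for the actual adversarial losses.)"""
    realness = float(np.mean((real - fake) ** 2))
    attr_term = float(np.mean((auxiliary_attributes(real)
                               - auxiliary_attributes(fake)) ** 2))
    seg_term = float(np.mean((auxiliary_segmentation(real)
                              - auxiliary_segmentation(fake)) ** 2))
    return realness + w_attr * attr_term + w_seg * seg_term

img = np.random.default_rng(0).random((8, 8, 3))
mask = np.zeros((8, 8, 1))
mask[2:5, 2:5] = 1.0
x = inpainting_network_input(img, mask)
print(x.shape)  # (8, 8, 5): 3 image channels + attribute + segmentation
```

A perfectly restored image incurs zero consistency penalty, while a restoration that flips an attribute (e.g. eye gender in the abstract's example) is penalized even if it looks locally realistic.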
Pages: 8
Related Papers (50 total)
  • [1] MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network
    Zhang, Zizhao
    Xie, Yuanpu
    Xing, Fuyong
    McGough, Mason
    Yang, Lin
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3549 - 3557
  • [2] PD-GAN: Probabilistic Diverse GAN for Image Inpainting
    Liu, Hongyu
    Wan, Ziyu
    Huang, Wei
    Song, Yibing
    Han, Xintong
    Liao, Jing
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 9367 - 9376
  • [3] Semantic Image Inpainting with Boundary Equilibrium GAN
    Jia, Yuhang
    Xing, Yan
    Peng, Cheng
    Jing, Chao
    Shao, Congzhang
    Wang, Yifan
    2019 2ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND PATTERN RECOGNITION (AIPR 2019), 2019, : 88 - 92
  • [4] High-Fidelity Image Inpainting with GAN Inversion
    Yu, Yongsheng
    Zhang, Libo
    Fan, Heng
    Luo, Tiejian
    COMPUTER VISION - ECCV 2022, PT XVI, 2022, 13676 : 242 - 258
  • [5] Image inpainting method based on AU-GAN
    Dong, Chuangchuang
    Liu, Huaming
    Wang, Xiuyou
    Bi, Xuehui
    MULTIMEDIA SYSTEMS, 2024, 30 (02)
  • [6] An improved GAN-based approach for image inpainting
    Nguyen, Ngoc-Thao
    Pham, Bang-Dang
    Thai, Thanh-Sang
    Nguyen, Minh-Thanh
    2021 RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES (RIVF 2021), 2021, : 174 - 179
  • [8] Image Inpainting Based on Contextual Coherent Attention GAN
    Li, Hong-an
    Hu, Liuqing
    Hua, Qiaozhi
    Yang, Meng
    Li, Xinpeng
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2022, 31 (12)
  • [9] Sem-GAN: Semantically-Consistent Image-to-Image Translation
    Cherian, Anoop
    Sullivan, Alan
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 1797 - 1806
  • [10] Using image smoothing structure information to guide image inpainting
    Zhang J.
    Lian J.
    Liu J.
    Dong Z.
    Zhang H.
    GUANGXUE JINGMI GONGCHENG/OPTICS AND PRECISION ENGINEERING, 2024, 32 (04) : 549 - 564