SAC-GAN: Structure-Aware Image Composition

Cited by: 2
Authors
Zhou, Hang [1 ]
Ma, Rui [2 ,3 ]
Zhang, Ling-Xiao [4 ]
Gao, Lin [4 ]
Mahdavi-Amiri, Ali [1 ]
Zhang, Hao [1 ]
Affiliations
[1] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC V5A 1S6, Canada
[2] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[3] Minist Educ, Engn Res Ctr Knowledge Driven Human Machine Intell, Changchun 130012, Peoples R China
[4] Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Layout; Transforms; Semantics; Three-dimensional displays; Image edge detection; Codes; Coherence; Structure-aware image composition; self-supervision; GANs; VISION;
DOI
10.1109/TVCG.2022.3226689
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We introduce an end-to-end learning framework for image-to-image composition, aiming to plausibly compose an object, represented as a cropped patch from an object image, into a background scene image. As our approach emphasizes the semantic and structural coherence of the composed images rather than their pixel-level RGB accuracy, we tailor the input and output of our network with structure-aware features and design our network losses accordingly, with ground truth established in a self-supervised setting through the object cropping. Specifically, our network takes as input the semantic layout features from the scene image, features encoded from the edges and silhouette of the object patch, and a latent code, and generates a 2D spatial affine transform defining the translation and scaling of the object patch. The learned parameters are then fed into a differentiable spatial transformer network to transform the object patch into the target image, and the model is trained adversarially using an affine transform discriminator and a layout discriminator. We evaluate our network, coined SAC-GAN, on various image composition scenarios in terms of quality, composability, and generalizability of the composite images. Comparisons are made to state-of-the-art alternatives, including Instance Insertion, ST-GAN, CompGAN and PlaceNet, confirming the superiority of our method.
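The core step the abstract describes is differentiable: the generator predicts a 2D affine transform (translation and scaling), and a spatial transformer network applies it to the object patch before the patch is composited into the scene. Below is a minimal PyTorch sketch of that transform-and-composite step only; the function name `compose_patch`, the tensor layout, and the simple mask-based alpha compositing are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def compose_patch(scene, patch, mask, params):
    """Warp an object patch with predicted affine parameters and composite it.

    scene:  (B, 3, H, W) background scene image
    patch:  (B, 3, H, W) object patch, zero-padded to the scene size
    mask:   (B, 1, H, W) object silhouette for the patch
    params: (B, 3) predicted [scale, tx, ty] (hypothetical parameterization)
    """
    s, tx, ty = params[:, 0], params[:, 1], params[:, 2]
    zeros = torch.zeros_like(s)
    # Build 2x3 affine matrices with uniform scaling and translation only
    # (no rotation or shear). Note that affine_grid/grid_sample follow an
    # inverse-warp convention: theta maps output coordinates back to
    # input-patch coordinates.
    theta = torch.stack([
        torch.stack([s, zeros, tx], dim=1),
        torch.stack([zeros, s, ty], dim=1),
    ], dim=1)  # (B, 2, 3)
    grid = F.affine_grid(theta, scene.size(), align_corners=False)
    warped_patch = F.grid_sample(patch, grid, align_corners=False)
    warped_mask = F.grid_sample(mask, grid, align_corners=False)
    # Differentiable alpha compositing of the transformed object into the scene
    return warped_mask * warped_patch + (1.0 - warped_mask) * scene
```

Because every operation above is differentiable, adversarial losses from the affine transform and layout discriminators can backpropagate through the composite image to the transform-predicting generator, which is what makes the end-to-end training described in the abstract possible.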
Pages: 3151-3165
Page count: 15