Edge Sensitive Unsupervised Image-to-Image Translation

Cited by: 0
Authors
Akkaya, Ibrahim Batuhan [1 ,2 ]
Halici, Ugur [2 ,3 ]
Affiliations
[1] Aselsan Inc, Res Ctr, Ankara, Turkey
[2] Middle East Tech Univ, Dept Elect & Elect Engn, Ankara, Turkey
[3] NOROM Neurosci & Neurotechnol Excellency Ctr, Ankara, Turkey
Keywords
Generative adversarial networks; image-to-image translation; domain adaptation; image processing;
DOI: Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
The goal of unsupervised image-to-image translation (IIT) is to learn a mapping from a source domain to a target domain without using paired image sets. Most current IIT methods apply adversarial training to match the distribution of the translated images to that of the target images. However, this may create artifacts in uniform areas of the source image when the two domains have different background distributions. In this work, we propose an unsupervised IIT method that preserves the uniform background information of the source images. Edge information computed with the Sobel operator is utilized to reduce these artifacts. To this end, we introduce an edge-preserving loss function, the Sobel loss, defined as the L2 norm between the Sobel responses of the original and the translated images. The proposed method is validated on the jellyfish-to-Haeckel dataset, which was prepared to demonstrate the problem described above and contains images with differing uniform background distributions. Our method obtains a clear performance gain over the baseline, showing the effectiveness of the Sobel loss.
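To make the loss definition concrete, below is a minimal sketch (not the authors' released code) of how such a Sobel loss could be computed. The choice of PyTorch, the function names sobel_response and sobel_loss, the depthwise-convolution implementation, and the use of mean squared error as the (squared) L2 distance are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def sobel_response(img: torch.Tensor) -> torch.Tensor:
    """Horizontal and vertical Sobel responses of a (B, C, H, W) image batch."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device, dtype=img.dtype)
    ky = kx.t()
    c = img.shape[1]
    # Depthwise convolution: the same Sobel filter pair is applied to every channel.
    weight = torch.stack([kx, ky]).unsqueeze(1).repeat(c, 1, 1, 1)  # (2*C, 1, 3, 3)
    return F.conv2d(img, weight, padding=1, groups=c)

def sobel_loss(source: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    """Squared L2 distance (MSE) between Sobel responses of source and translation."""
    return F.mse_loss(sobel_response(source), sobel_response(translated))

# Hypothetical usage inside a CycleGAN-style training objective (weights assumed):
# total_loss = adversarial_loss + cycle_loss + lambda_sobel * sobel_loss(real_A, fake_B)
```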
Pages: 4