Edge Sensitive Unsupervised Image-to-Image Translation

Times cited: 0
Authors
Akkaya, Ibrahim Batuhan [1 ,2 ]
Halici, Ugur [2 ,3 ]
Affiliations
[1] Aselsan Inc, Res Ctr, Ankara, Turkey
[2] Middle East Tech Univ, Dept Elect & Elect Engn, Ankara, Turkey
[3] NOROM Neurosci & Neurotechnol Excellency Ctr, Ankara, Turkey
Keywords
Generative adversarial networks; image-to-image translation; domain adaptation; image processing;
DOI
Not available
Chinese Library Classification (CLC) number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline classification code
0808; 0809;
Abstract
The goal of unsupervised image-to-image translation (IIT) is to learn a mapping from a source domain to a target domain without paired image sets. Most current IIT methods apply adversarial training to match the distribution of the translated images to that of the target images. However, this may create artifacts in uniform areas of the source image when the two domains have different background distributions. In this work, we propose an unsupervised IIT method that preserves the uniform background information of the source images. Edge information computed with the Sobel operator is used to reduce these artifacts. To this end, we introduce an edge-preserving loss function, the Sobel loss, defined as the L2 norm between the Sobel responses of the original and translated images. The proposed method is validated on the jellyfish-to-Haeckel dataset, which was prepared to demonstrate this problem and contains images with different uniform background distributions. Our method achieves a clear performance gain over the baseline method, showing the effectiveness of the Sobel loss.
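The abstract specifies the Sobel loss only as the L2 norm between the Sobel responses of the original and translated images, i.e. roughly L_Sobel(x) = || S(x) - S(G(x)) ||_2^2, where S(.) is the Sobel edge response and G the generator. The sketch below is a minimal illustrative formulation of that idea, assuming a PyTorch setup; the per-channel gradient-magnitude computation, the mean-squared reduction, and all function names are assumptions, not details taken from the paper.

```python
# Minimal sketch of a Sobel-based edge-preserving loss (assumed PyTorch
# formulation; the paper only states that the loss is the L2 norm between
# the Sobel responses of the original and translated images).
import torch
import torch.nn.functional as F

# 3x3 Sobel kernels for horizontal and vertical gradients.
_SOBEL_X = torch.tensor([[-1.0, 0.0, 1.0],
                         [-2.0, 0.0, 2.0],
                         [-1.0, 0.0, 1.0]])
_SOBEL_Y = _SOBEL_X.t().contiguous()


def sobel_response(img: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude for an (N, C, H, W) batch."""
    c = img.shape[1]
    kx = _SOBEL_X.to(img).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = _SOBEL_Y.to(img).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)  # horizontal gradients
    gy = F.conv2d(img, ky, padding=1, groups=c)  # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # gradient magnitude


def sobel_loss(source: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    """L2 (mean squared) distance between the Sobel responses of the images."""
    return F.mse_loss(sobel_response(source), sobel_response(translated))
```

In training, such a term would typically be added to the adversarial (and any cycle-consistency) objectives with a weighting coefficient; that weighting is not specified in the abstract.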
Pages: 4