Edge Sensitive Unsupervised Image-to-Image Translation

Cited by: 0
Authors
Akkaya, Ibrahim Batuhan [1 ,2 ]
Halici, Ugur [2 ,3 ]
Institutions
[1] Aselsan Inc, Res Ctr, Ankara, Turkey
[2] Middle East Tech Univ, Dept Elect & Elect Engn, Ankara, Turkey
[3] NOROM Neurosci & Neurotechnol Excellency Ctr, Ankara, Turkey
Keywords
Generative adversarial networks; image-to-image translation; domain adaptation; image processing;
DOI
Not available
CLC classification
TM (Electrical engineering); TN (Electronics and communication technology);
Discipline codes
0808; 0809;
Abstract
The goal of unsupervised image-to-image translation (IIT) is to learn a mapping from a source domain to a target domain without using paired image sets. Most current IIT methods apply adversarial training to match the distribution of the translated images to that of the target images. However, this can create artifacts in uniform areas of the source image when the two domains have different background distributions. In this work, we propose an unsupervised IIT method that preserves the uniform background information of the source images. Edge information computed with the Sobel operator is used to reduce these artifacts. To this end, we introduce an edge-preserving loss function, the Sobel loss, defined as the L2 norm between the Sobel responses of the original and the translated images. The proposed method is validated on the jellyfish-to-Haeckel dataset, which was prepared to demonstrate the problem described above and contains images with differing uniform background distributions. Our method obtains a clear performance gain over the baseline method, showing the effectiveness of the Sobel loss.
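The Sobel loss described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the exact aggregation of the horizontal and vertical Sobel responses and any normalization are assumptions, and the function names (`sobel_response`, `sobel_loss`) are hypothetical.

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_response(img):
    """Valid-mode 2-D correlation of a grayscale image with both Sobel kernels."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * SOBEL_X)
            gy[i, j] = np.sum(patch * SOBEL_Y)
    return gx, gy

def sobel_loss(source, translated):
    """L2 norm between the Sobel responses of the source and translated images."""
    sx, sy = sobel_response(source)
    tx, ty = sobel_response(translated)
    return np.sqrt(np.sum((sx - tx) ** 2 + (sy - ty) ** 2))
```

Because the loss compares gradient responses rather than raw intensities, a translation that keeps the source's edges while changing its appearance incurs no penalty, whereas artifacts introduced in uniform background regions create spurious edges and are penalized.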
Pages: 4
Related Papers
50 records total
  • [31] UNSUPERVISED IMAGE-TO-IMAGE TRANSLATION VIA FAIR REPRESENTATION OF GENDER BIAS
    Hwang, Sunhee
    Byun, Hyeran
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1953 - 1957
  • [32] SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation
    Shao, Xuning
    Zhang, Weidong
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6526 - 6535
  • [33] Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation
    Gomez, Raul
    Liu, Yahui
    De Nadai, Marco
    Karatzas, Dimosthenis
    Lepri, Bruno
    Sebe, Nicu
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 3164 - 3172
  • [34] Unsupervised image-to-image translation by semantics consistency and self-attention
    Zhang Zhibin
    Xue Wanli
    Fu Guokai
    OPTOELECTRONICS LETTERS, 2022, 18 (03) : 175 - 180
  • [35] Multimodal Unsupervised Image-to-Image Translation Without Independent Style Encoder
    Sun, Yanbei
    Lu, Yao
    Lu, Haowei
    Zhao, Qingjie
    Wang, Shunzhou
    MULTIMEDIA MODELING (MMM 2022), PT I, 2022, 13141 : 624 - 636
  • [36] Multi-Constraint Adversarial Networks for Unsupervised Image-to-Image Translation
    Saxena, Divya
    Kulshrestha, Tarun
    Cao, Jiannong
    Cheung, Shing-Chi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 1601 - 1612
  • [38] Allowing Supervision in Unsupervised Deformable-Instances Image-to-Image Translation
    Liu, Yu
    Su, Sitong
    Zhu, Junchen
    Zheng, Feng
    Gao, Lianli
    Song, Jingkuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5335 - 5349
  • [39] Unsupervised Generative Adversarial Network for Plantar Pressure Image-to-Image Translation
    Ahmadian, Mona
    Beheshti, Mohammad T. H.
    Kalhor, Ahmad
    Shirian, Amir
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 2580 - 2583
  • [40] Unsupervised Multimodal Image-to-Image Translation: Generate What You Want
    Zhang, Chao
    Xi, Wei
    Liu, Xinhui
    Bai, Gairui
    Sun, Jingtong
    Yu, Fan
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,