Indirect Adversarial Losses via an Intermediate Distribution for Training GANs

Cited by: 0
Authors:
Yang, Rui [1]
Vo, Duc Minh [1]
Nakayama, Hideki [1]
Affiliations:
[1] Univ Tokyo, Tokyo, Japan
DOI:
10.1109/WACV56688.2023.00463
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
In this study, we consider the weak-convergence characteristics of Integral Probability Metric (IPM) methods for training Generative Adversarial Networks (GANs). We first focus on a successful IPM-based GAN method that employs a repulsive version of the Maximum Mean Discrepancy (MMD) as the discriminator loss (called repulsive MMD-GAN). We reinterpret its repulsive metric as an indirect discriminator loss defined with respect to an intermediate distribution, and, based on this reinterpretation, propose a novel generator loss via such an intermediate distribution. Our indirect adversarial losses use a simple known distribution (the Normal or Uniform distribution in our experiments) to mediate indirect adversarial learning among three parts: the real, fake, and intermediate distributions. Furthermore, we adopt the Kernelized Stein Discrepancy (KSD), also from the IPM family, as the adversarial loss function to avoid the randomness introduced by sampling the intermediate distribution, because the target (intermediate) side of KSD is sample-free. Experiments on several real-world datasets show that our methods successfully train GANs with the intermediate-distribution-based KSD and MMD and outperform previous loss metrics.
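As context for the MMD-based losses discussed in the abstract, the following is a minimal sketch of an unbiased estimator of the squared Maximum Mean Discrepancy between a batch of generator samples and a batch drawn from a simple intermediate distribution (a standard Normal, one of the choices named above). This illustrates the generic MMD estimator only, not the authors' repulsive or indirect loss; the function names, the Gaussian kernel bandwidth sigma, and the batch shapes are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
    sq_dists = np.sum(x**2, 1)[:, None] - 2 * x @ y.T + np.sum(y**2, 1)[None, :]
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimator of the squared MMD between samples x ~ P and y ~ Q:
    #   MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')],
    # with the diagonal removed from the within-sample averages.
    m, n = len(x), len(y)
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * k_xy.mean()

# Illustrative usage: compare fake samples against samples from a simple
# intermediate distribution (a standard Normal). Shapes are arbitrary.
rng = np.random.default_rng(0)
fake = rng.normal(loc=0.5, scale=1.2, size=(256, 16))  # stand-in for generator features
intermediate = rng.standard_normal(size=(256, 16))     # intermediate-distribution samples
print(mmd2_unbiased(fake, intermediate))
```

On the KSD side, the abstract's "sample-free" remark matches the standard form of the Kernelized Stein Discrepancy: for a target density $p$ with score $s_p(x) = \nabla_x \log p(x)$ and a kernel $k$, $\mathrm{KSD}^2(Q \,\|\, P) = \mathbb{E}_{x, x' \sim Q}\left[u_p(x, x')\right]$, where the Stein kernel $u_p$ depends on $P$ only through $s_p$. Evaluating it therefore requires samples from $Q$ (the fake side) but only the analytic score of the intermediate target, which is available in closed form for, e.g., a Normal target.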
Pages: 4641-4650 (10 pages)