Stabilizing Adversarial Invariance Induction from Divergence Minimization Perspective

Cited by: 0
Authors:
Iwasawa, Yusuke [1 ]
Akuzawa, Kei [1 ]
Matsuo, Yutaka [1 ]
Affiliations:
[1] Univ Tokyo, Tokyo, Japan
Source:
PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2020
Keywords:
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Adversarial invariance induction (AII) is a generic and powerful framework for enforcing invariance to nuisance attributes in neural network representations. However, its optimization is often unstable, and little is known about its practical behavior. This paper analyzes the reasons for these optimization difficulties and provides a better optimization procedure by rethinking AII from a divergence minimization perspective. Interestingly, this perspective reveals a cause of the difficulties: AII does not ensure proper divergence minimization, which is a requirement for invariant representations. We then propose a simple variant of AII, called invariance induction by discriminator matching, which takes into account the divergence minimization interpretation of invariant representations. Our method consistently achieves near-optimal invariance on toy datasets across various configurations in which the original AII is catastrophically unstable. Extensive experiments on four real-world datasets also support the superior performance of the proposed method, leading to improved user anonymization and domain generalization.
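For context, adversarial invariance induction is commonly formulated as a minimax game between an encoder E, a task predictor T, and a discriminator D that tries to recover the nuisance attribute a from the learned representation. The sketch below follows this standard formulation; the notation is illustrative and not taken verbatim from the paper:

```latex
\min_{E,\,T}\ \max_{D}\;
\mathbb{E}_{(x,\,y,\,a)}\Big[
  \mathcal{L}_{\mathrm{task}}\big(T(E(x)),\, y\big)
  \;-\; \lambda\, \mathcal{L}_{\mathrm{adv}}\big(D(E(x)),\, a\big)
\Big]
```

Here \(\lambda > 0\) trades task accuracy against invariance. Under the divergence minimization view discussed in the abstract, the inner maximization over D estimates a divergence between the attribute-conditional representation distributions \(p(E(x) \mid a)\), which the encoder is then trained to minimize; the instability arises when this estimation step fails.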
Pages: 1955-1962
Page count: 8
Related papers (50 in total):
  • [1] Adversarial α-divergence minimization for Bayesian approximate inference
    Rodriguez-Santana, Simon
    Hernandez-Lobato, Daniel
    NEUROCOMPUTING, 2022, 471 : 260 - 274
  • [2] Adversarial Multiclass Classification: A Risk Minimization Perspective
    Fathony, Rizal
    Liu, Anqi
    Asif, Kaiser
    Ziebart, Brian D.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [3] A Divergence Minimization Perspective on Imitation Learning Methods
    Ghasemipour, Seyed Kamyar Seyed
    Zemel, Richard
    Gu, Shixiang
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [4] A New Perspective on Stabilizing GANs Training: Direct Adversarial Training
    Ansari, Mohd Shadab
    Rath, Ibhan Chand
    Patro, Siba Kumar
    Shukla, Anshuman
    Bahirat, Himanshu J.
    IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS, 2023, 59 (01) : 1077 - 1089
  • [5] A New Perspective on Stabilizing GANs Training: Direct Adversarial Training
    Li, Ziqiang
    Xia, Pengfei
    Tao, Rentuo
    Niu, Hongjing
    Li, Bin
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (01): : 178 - 189
  • [6] INVARIANCE FROM THE EUCLIDEAN GEOMETER'S PERSPECTIVE
    VANGOOL, LJ
    MOONS, T
    PAUWELS, E
    WAGEMANS, J
    PERCEPTION, 1994, 23 (05) : 547 - 561
  • [7] ACE: Explaining cluster from an adversarial perspective
    Lu, Yang Young
    Yu, Timothy C.
    Bonora, Giancarlo
    Noble, William Stafford
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] Alpha-divergence minimization with mixed variational posterior for Bayesian neural networks and its robustness against adversarial examples
    Liu, Xiao
    Sun, Shiliang
    NEUROCOMPUTING, 2021, 423 : 427 - 434
  • [9] From the Perspective of CNN to Adversarial Iris Images
    Huang, Yi
    Kong, Adams Wai Kin
    Lam, Kwok-Yan
    2018 IEEE 9TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2018
  • [10] Inflation and conformal invariance: the perspective from radial quantization
    Kehagias, Alex
    Riotto, Antonio
    FORTSCHRITTE DER PHYSIK-PROGRESS OF PHYSICS, 2017, 65 (05)