RIATIG: Reliable and Imperceptible Adversarial Text-to-Image Generation with Natural Prompts

Cited by: 9
Authors
Liu, Han [1 ]
Wu, Yuhao [1 ]
Zhai, Shixuan [1 ]
Yuan, Bo [2 ]
Zhang, Ning [1 ]
Affiliations
[1] Washington Univ, St Louis, MO 63110 USA
[2] Rutgers State Univ, Piscataway, NJ USA
DOI
10.1109/CVPR52729.2023.01972
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The field of text-to-image generation has made remarkable strides in creating high-fidelity and photorealistic images. As this technology gains popularity, there is growing concern about its potential security risks. However, the robustness of these models from an adversarial perspective remains underexplored: existing research has primarily focused on untargeted settings and lacks holistic consideration of reliability (attack success rate) and stealthiness (imperceptibility). In this paper, we propose RIATIG, a reliable and imperceptible adversarial attack against text-to-image models via inconspicuous examples. By formulating example crafting as an optimization problem and solving it with a genetic-based method, our attack generates imperceptible prompts for text-to-image generation models in a reliable way. Evaluation on six popular text-to-image generation models demonstrates the efficiency and stealthiness of our attack in both white-box and black-box settings. To allow the community to build on top of our findings, we have made the artifacts available(1).
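The genetic-based optimization mentioned in the abstract can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: in RIATIG the fitness of a candidate prompt would be derived from the similarity between the image it generates and an attacker-chosen target, whereas here a hidden target string stands in for that objective, and all names (`fitness`, `mutate`, `crossover`, `genetic_search`) are hypothetical.

```python
import random

random.seed(0)  # reproducible toy run

# Placeholder objective: in the actual attack this would score how close the
# image generated from a candidate prompt is to the adversary's target image.
TARGET = "a photo of a dog"

def fitness(prompt: str) -> float:
    # Fraction of character positions that match the hidden target.
    return sum(a == b for a, b in zip(prompt, TARGET)) / len(TARGET)

def mutate(prompt: str, rate: float = 0.1) -> str:
    # Randomly replace characters; the real attack would favor visually
    # inconspicuous substitutions to preserve imperceptibility.
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in prompt)

def crossover(p1: str, p2: str) -> str:
    # Single-point crossover between two parent prompts of equal length.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def genetic_search(seed: str, pop_size: int = 40, generations: int = 200) -> str:
    population = [mutate(seed, 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 4]  # keep the best quarter
        # Refill the population with mutated offspring of elite parents.
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

best = genetic_search("x" * len(TARGET))
```

Swapping the placeholder `fitness` for an image-similarity score against a target image (e.g., via a CLIP-style encoder) recovers the general shape of a black-box prompt attack, since the loop needs only query access to the scoring function.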
Pages: 20585-20594
Number of pages: 10