StyleAutoEncoder for Manipulating Image Attributes Using Pre-trained StyleGAN

Cited by: 0
Authors:
Bedychaj, Andrzej [1 ]
Tabor, Jacek [1 ]
Smieja, Marek [1 ]
Affiliations:
[1] Jagiellonian Univ, Krakow, Poland
DOI:
10.1007/978-981-97-2253-2_10
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
Deep conditional generative models are excellent tools for creating high-quality images and editing their attributes. However, training modern generative models from scratch is very expensive and requires large computational resources. In this paper, we introduce StyleAutoEncoder (StyleAE), a lightweight AutoEncoder module that works as a plugin for pre-trained generative models and allows manipulation of selected image attributes. The proposed method offers a cost-effective way to adapt deep generative models with limited computational resources, making it a promising technique for a wide range of applications. We evaluate StyleAE by combining it with StyleGAN, currently one of the top generative models. Our experiments demonstrate that StyleAE manipulates image attributes at least as effectively as state-of-the-art algorithms based on invertible normalizing flows, while being simpler, faster, and offering more freedom in designing the neural architecture.
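The abstract describes StyleAE only at a high level. Below is a minimal PyTorch sketch of the idea as stated there: a small autoencoder trained on latents from a frozen, pre-trained StyleGAN, with a few code coordinates reserved for attributes, so that editing an attribute amounts to encoding a latent, changing one coordinate, and decoding. All dimensions, names, and the loss weighting are assumptions for illustration; the paper's exact architecture and objectives are not specified in this record.

```python
import torch
import torch.nn as nn

W_DIM = 512     # dimensionality of StyleGAN's w latent space
N_ATTRS = 8     # number of attributes to control (hypothetical)

class StyleAE(nn.Module):
    """Lightweight autoencoder over StyleGAN w latents. The first N_ATTRS
    coordinates of the code are trained to align with attribute labels;
    the remaining coordinates preserve enough information to reconstruct w."""
    def __init__(self, w_dim=W_DIM, code_dim=W_DIM, n_attrs=N_ATTRS):
        super().__init__()
        self.n_attrs = n_attrs
        self.encoder = nn.Sequential(
            nn.Linear(w_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 1024), nn.ReLU(),
            nn.Linear(1024, w_dim),
        )

    def forward(self, w):
        z = self.encoder(w)
        return self.decoder(z), z

def loss_fn(model, w, attrs, alpha=1.0):
    """Reconstruction loss on w plus a supervised term tying the first
    n_attrs code coordinates to attribute labels (one plausible
    conditioning scheme; the paper's exact losses are not given here)."""
    w_rec, z = model(w)
    rec = nn.functional.mse_loss(w_rec, w)
    attr = nn.functional.mse_loss(z[:, :model.n_attrs], attrs)
    return rec + alpha * attr

# Attribute editing: encode w, overwrite one designated coordinate,
# decode back to w, then feed the edited w to the frozen StyleGAN
# synthesis network (not shown here).
model = StyleAE()
with torch.no_grad():
    w = torch.randn(4, W_DIM)        # stand-in for real StyleGAN w latents
    _, z = model(w)
    z[:, 0] = 1.0                    # e.g. set attribute 0 to "present"
    w_edited = model.decoder(z)
```

Note that only the small autoencoder is trained; the StyleGAN generator stays frozen throughout, which is what makes the approach cheap compared with training or fine-tuning the generative model itself.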
Pages: 118-130 (13 pages)
Related Papers (50 results):
  • [1] StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
    Choi, Jongwoo
    Seo, Kwanggyoon
    Ashtari, Amirsaman
    Noh, Junyong
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 7872 - 7881
  • [2] Unsupervised Image-to-Image Translation via Pre-Trained StyleGAN2 Network
    Huang, Jialu
    Liao, Jing
    Kwong, Sam
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 1435 - 1448
  • [3] Underwater Image Enhancement Using Pre-trained Transformer
    Boudiaf, Abderrahmene
    Guo, Yuhang
    Ghimire, Adarsh
    Werghi, Naoufel
    De Masi, Giulia
    Javed, Sajid
    Dias, Jorge
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT III, 2022, 13233 : 480 - 488
  • [4] Pre-Trained Image Processing Transformer
    Chen, Hanting
    Wang, Yunhe
    Guo, Tianyu
    Xu, Chang
    Deng, Yiping
    Liu, Zhenhua
    Ma, Siwei
    Xu, Chunjing
    Xu, Chao
    Gao, Wen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 12294 - 12305
  • [5] An Application of pre-Trained CNN for Image Classification
    Abdullah
    Hasan, Mohammad S.
    2017 20TH INTERNATIONAL CONFERENCE OF COMPUTER AND INFORMATION TECHNOLOGY (ICCIT), 2017,
  • [6] Face Inpainting with Pre-trained Image Transformers
    Gonc, Kaan
    Saglam, Baturay
    Kozat, Suleyman S.
    Dibeklioglu, Hamdi
    2022 30TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU, 2022,
  • [7] Manipulating Pre-Trained Encoder for Targeted Poisoning Attacks in Contrastive Learning
    Chen, Jian
    Gao, Yuan
    Liu, Gaoyang
    Abdelmoniem, Ahmed M.
    Wang, Chen
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2412 - 2424
  • [8] Are Pre-trained Convolutions Better than Pre-trained Transformers?
    Tay, Yi
    Dehghani, Mostafa
    Gupta, Jai
    Aribandi, Vamsi
    Bahri, Dara
    Qin, Zhen
    Metzler, Donald
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 4349 - 4359
  • [9] Unlocking Pre-trained Image Backbones for Semantic Image Synthesis
    Berrada, Tariq
    Verbeek, Jakob
    Couprie, Camille
    Alahari, Karteek
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 7840 - 7849
  • [10] USING PRE-TRAINED TEMPORARY HELP
    ZITO, JM
TRAINING AND DEVELOPMENT JOURNAL, 1968, 22 (09): 24-&