Multi-Label Conditional Generation From Pre-Trained Models

Cited by: 0
Authors
Proszewska, Magdalena [1 ]
Wolczyk, Maciej [1 ]
Zieba, Maciej [2 ,3 ]
Wielopolski, Patryk [4 ]
Maziarka, Lukasz [1 ]
Smieja, Marek [1 ]
Affiliations
[1] Jagiellonian Univ, Fac Math & Comp Sci, PL-31007 Krakow, Poland
[2] Tooploox, PL-53601 Wroclaw, Poland
[3] Wroclaw Univ Sci & Technol, PL-53601 Wroclaw, Poland
[4] Wroclaw Univ Sci & Technol, PL-50370 Wroclaw, Poland
Keywords
Training; Computational modeling; Adaptation models; Vectors; Data models; Aerospace electronics; Three-dimensional displays; Conditional generation; deep generative models; GANs; invertible normalizing flows; pre-trained models; VAEs
DOI
10.1109/TPAMI.2024.3382008
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Although modern generative models achieve excellent quality in a variety of tasks, they often lack the essential ability to generate examples with requested properties, such as the age of the person in a photo or the weight of a generated molecule. To overcome these limitations, we propose PluGeN (Plugin Generative Network), a simple yet effective generative technique that can be used as a plugin for pre-trained generative models. The idea behind our approach is to transform the entangled latent representation, using a flow-based module, into a multi-dimensional space where the values of each attribute are modeled as an independent one-dimensional distribution. As a consequence, PluGeN can generate new samples with desired attributes as well as manipulate labeled attributes of existing examples. Thanks to the disentangled latent representation, we can even generate samples with combinations of attributes that are rare or absent from the dataset, such as a young person with gray hair, a man with make-up, or a woman with a beard. In contrast to competing approaches, PluGeN can be trained on partially labeled data. We combine PluGeN with GAN and VAE models and apply it to conditional generation and manipulation of images, chemical molecule modeling, and 3D point cloud generation.
Pages: 6185-6198
Number of pages: 14
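To make the mechanism described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the idea: an invertible coupling flow maps the frozen generator's entangled latent code to a space whose first few coordinates each correspond to one attribute, so editing those coordinates and inverting the flow yields an edited latent code. All names (`PluGeNFlow`, `AffineCoupling`, `n_labels`) are illustrative assumptions, not the authors' implementation, and the training objective (fitting each label coordinate to a label-conditioned one-dimensional distribution) is omitted.

```python
# Hypothetical sketch of a PluGeN-style flow plugin; not the authors' code.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling block: invertible by construction."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, z2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, w):
        w1, w2 = w[:, :self.half], w[:, self.half:]
        log_s, t = self.net(w1).chunk(2, dim=-1)
        return torch.cat([w1, (w2 - t) * torch.exp(-log_s)], dim=-1)

class PluGeNFlow(nn.Module):
    """Invertible map between a frozen generator's entangled latent z and a
    space whose first n_labels coordinates each model one attribute."""
    def __init__(self, dim, n_labels, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(AffineCoupling(dim) for _ in range(n_blocks))
        self.n_labels = n_labels

    def forward(self, z):
        # z -> (label coords, style coords); flip the vector between blocks
        # so both halves get transformed by some coupling layer.
        for b in self.blocks:
            z = torch.flip(b(z), dims=[-1])
        return z[:, :self.n_labels], z[:, self.n_labels:]

    def inverse(self, labels, style):
        # (label coords, style coords) -> z, undoing the flips and couplings.
        w = torch.cat([labels, style], dim=-1)
        for b in reversed(self.blocks):
            w = b.inverse(torch.flip(w, dims=[-1]))
        return w

# Usage sketch: edit one attribute of an existing latent code; the
# pre-trained generator stays frozen and only receives the re-encoded latent.
with torch.no_grad():
    flow = PluGeNFlow(dim=512, n_labels=3)
    z = torch.randn(1, 512)                 # latent from a pre-trained GAN/VAE
    labels, style = flow(z)
    labels[:, 0] = 1.0                      # e.g., turn on one attribute dim
    z_edited = flow.inverse(labels, style)  # decode z_edited with the generator
```

Because each label occupies its own coordinate, combinations never seen together in the training data can still be composed at sampling time, which is what the abstract means by generating rare or unseen attribute combinations.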
Related Papers
50 records in total
  • [1] PluGeN: Multi-Label Conditional Generation from Pre-trained Models
    Wolczyk, Maciej
    Proszewska, Magdalena
    Maziarka, Lukasz
    Zieba, Maciej
    Wielopolski, Patryk
    Kurczab, Rafal
    Smieja, Marek
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8647 - 8656
  • [2] EmoBART: Multi-label Emotion Classification Method Based on Pre-trained Label Sequence Generation Model
    Chen, Sufen
    Chen, Lei
    Zeng, Xueqiang
    NEURAL COMPUTING FOR ADVANCED APPLICATIONS, NCAA 2024, PT III, 2025, 2183 : 104 - 115
  • [3] Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech
    Atmaja, Bagus Tris
    Sasou, Akira
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1026 - 1029
  • [4] Single-label and multi-label conceptor classifiers in pre-trained neural networks
    Qian, Guangwu
    Zhang, Lei
    Wang, Yan
    NEURAL COMPUTING & APPLICATIONS, 2019, 31 (10): 6179 - 6188
  • [5] Leveraging Pre-Trained Extreme Multi-Label Classifiers for Zero-Shot Learning
    Ostapuk, Natalia
    Dolamic, Ljiljana
    Mermoud, Alain
    Cudre-Mauroux, Philippe
    2024 11TH IEEE SWISS CONFERENCE ON DATA SCIENCE, SDS 2024, 2024, : 233 - 236
  • [6] Transfer learning with pre-trained conditional generative models
    Yamaguchi, Shin'ya
    Kanai, Sekitoshi
    Kumagai, Atsutoshi
    Chijiwa, Daiki
    Kashima, Hisashi
    MACHINE LEARNING, 2025, 114 (04)
  • [7] Pseudo-Prompt Generating in Pre-trained Vision-Language Models for Multi-label Medical Image Classification
    Ye, Yaoqin
    Zhang, Junjie
    Shi, Hongwei
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XIV, 2025, 15044 : 279 - 298
  • [8] Research on cross-lingual multi-label patent classification based on pre-trained model
    Lu, Yonghe
    Chen, Lehua
    Tong, Xinyu
    Peng, Yongxin
    Zhu, Hou
    SCIENTOMETRICS, 2024, 129 (06) : 3067 - 3087
  • [9] Conditional pre-trained attention based Chinese question generation
    Zhang, Liang
    Fang, Ligang
    Fan, Zheng
    Li, Wei
    An, Jing
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (20)