MONTRAGE: Monitoring Training for Attribution of Generative Diffusion Models

Cited by: 0
Authors:
Brokman, Jonathan [1 ,2 ]
Hofman, Omer [1 ]
Vainshtein, Roman [1 ]
Giloni, Amit [1 ,3 ]
Shimizu, Toshiya [4 ]
Rachmil, Oren [1]
Zolfi, Alon [1]
Shabtai, Asaf [3]
Unno, Yuki [4 ]
Kojima, Hisashi [4 ]
Affiliations:
[1] Fujitsu Res Europe, Slough, Berks, England
[2] Technion Israel Inst Technol, Haifa, Israel
[3] Ben Gurion Univ Negev, Beer Sheva, Israel
[4] Fujitsu Ltd, Tokyo, Japan
Source: Computer Vision - ECCV 2024 (Lecture Notes in Computer Science)
Keywords: Data Attribution; Diffusion Models; Model Customization
DOI: 10.1007/978-3-031-73226-3_1
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
Diffusion models, which have revolutionized image generation, face challenges related to intellectual property. These challenges arise when a generated image is influenced by copyrighted images from the training data, a plausible scenario for internet-collected data. Pinpointing the influential images in the training dataset, a task known as data attribution, is therefore crucial for transparency about content origins. We introduce MONTRAGE, a pioneering data attribution method. Unlike existing approaches that analyze the model post-training, MONTRAGE integrates a novel technique that monitors generations throughout training via internal model representations. It is tailored to customized diffusion models, where access to the training dynamics is a practical assumption. This approach, coupled with a new loss function, enhances performance while maintaining efficiency. MONTRAGE's advantage is evaluated at two granularity levels, between-concepts and within-concept, where it outperforms current state-of-the-art methods in accuracy. This substantiates MONTRAGE's insights into diffusion models and its contribution toward copyright solutions for AI digital art.
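The abstract's central idea, recording internal model representations while a customized diffusion model is fine-tuned and later using them to attribute a generated image to training images, can be illustrated with a short sketch. The code below is a hypothetical toy, not the paper's algorithm: the class `ActivationMonitor`, the per-sample EMA bank, the cosine-similarity scoring, and the stand-in denoiser are all illustrative assumptions; MONTRAGE's actual monitoring technique and loss function are described in the paper (DOI above).

```python
# Hypothetical sketch: hook one internal layer during fine-tuning, keep a
# running embedding per training image, and rank training images by similarity
# to a query. Names and design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ActivationMonitor:
    """Keeps an EMA embedding per training sample, captured from one layer."""

    def __init__(self, layer, num_samples, dim, momentum=0.9):
        self.bank = torch.zeros(num_samples, dim)  # one row per training image
        self.momentum = momentum
        self.batch_indices = []                    # set by the training loop
        self.latest = None                         # embeddings of the last batch
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Spatially pool the layer output [B, C, H, W] into [B, C] embeddings.
        self.latest = output.detach().mean(dim=(2, 3))

    def update(self):
        # Fold the latest batch into the per-sample bank (the monitoring step).
        emb = F.normalize(self.latest, dim=1)
        for row, idx in zip(emb, self.batch_indices):
            self.bank[idx] = self.momentum * self.bank[idx] + (1 - self.momentum) * row

    def score(self, query):
        # Cosine similarity of one query embedding against every training image.
        return F.normalize(self.bank, dim=1) @ F.normalize(query, dim=0)


# Toy stand-in for a denoiser; a real setup would hook a U-Net block instead.
denoiser = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 3, 3, padding=1))
monitor = ActivationMonitor(denoiser[0], num_samples=16, dim=8)

images = torch.randn(16, 3, 32, 32)               # stand-in training set
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for step in range(50):
    idx = torch.randint(0, 16, (4,))
    batch = images[idx]
    noise = torch.randn_like(batch)
    loss = F.mse_loss(denoiser(batch + 0.1 * noise), noise)  # toy objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    monitor.batch_indices = idx.tolist()
    monitor.update()                               # record as training runs

# Attribution query: embed a "generated" image via the same hooked layer.
with torch.no_grad():
    denoiser(images[3:4] + 0.1 * torch.randn(1, 3, 32, 32))
scores = monitor.score(monitor.latest[0])
print("Most influential training indices:", scores.topk(3).indices.tolist())
```

In this toy, the training image the query was derived from (index 3) should typically rank near the top. The point of the sketch is only the structural idea stated in the abstract: attribution signals are collected during training rather than extracted from the model post-training.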
Pages: 1-17
Page count: 17