Universal Fingerprint Generation: Controllable Diffusion Model With Multimodal Conditions

Cited by: 0
Authors
Grosz, Steven A. [1 ]
Jain, Anil K. [1 ]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
Keywords
Fingerprint recognition; Training; Diffusion models; Data models; Standards; Optical imaging; Image synthesis; Computational modeling; Vectors; Pipelines; Artificial fingerprint generation; denoising diffusion probabilistic models; latent diffusion models; synthetic fingerprints; zero-shot image generation; MULTISENSOR;
DOI
10.1109/TPAMI.2024.3486179
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The utilization of synthetic data for fingerprint recognition has garnered increased attention due to its potential to alleviate privacy concerns surrounding sensitive biometric data. However, current methods for generating fingerprints have limitations in creating impressions of the same finger with useful intra-class variations. To tackle this challenge, we present GenPrint, a framework to produce fingerprint images of various types while maintaining identity and offering humanly understandable control over different appearance factors, such as fingerprint class, acquisition type, sensor device, and quality level. Unlike previous fingerprint generation approaches, GenPrint is not confined to replicating style characteristics from the training dataset alone: it enables the generation of novel styles from unseen devices without requiring additional fine-tuning. To accomplish these objectives, we developed GenPrint using latent diffusion models with multimodal conditions (text and image) for consistent generation of style and identity. Our experiments leverage a variety of publicly available datasets for training and evaluation. Results demonstrate the benefits of GenPrint in terms of identity preservation, explainable control, and universality of generated images. Importantly, recognition models trained on GenPrint-generated images achieve accuracy comparable or even superior to models trained solely on real data, and augmenting existing real fingerprint datasets with the generated images further enhances performance.
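As a rough illustration of the multimodal conditioning idea described in the abstract (and emphatically not the authors' implementation), the sketch below runs a toy DDPM-style sampling loop in a latent space, where a stand-in noise-prediction network is conditioned on both a text-style embedding and an identity-image embedding. All names, dimensions, and the noise schedule are illustrative assumptions; a real system would use a trained U-Net or transformer denoiser and a VAE decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50                                   # number of diffusion steps (toy)
latent_dim = 16                          # toy latent size
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(z, t, text_emb, image_emb):
    """Stand-in for the conditional noise-prediction network.
    Conditioning is a simple additive bias combining the text (style)
    and image (identity) embeddings so the loop runs end to end."""
    cond = 0.05 * (text_emb + image_emb)
    return 0.1 * z + cond                # "predicted noise" (toy)

def sample(text_emb, image_emb):
    z = rng.standard_normal(latent_dim)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = toy_denoiser(z, t, text_emb, image_emb)
        # DDPM posterior-mean update toward the denoised latent
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                        # add noise except at the last step
            z += np.sqrt(betas[t]) * rng.standard_normal(latent_dim)
    return z

text_emb = rng.standard_normal(latent_dim)   # e.g. "rolled print, optical sensor"
image_emb = rng.standard_normal(latent_dim)  # embedding of an identity exemplar
latent = sample(text_emb, image_emb)
print(latent.shape)
```

Fixing `image_emb` while varying `text_emb` mirrors how GenPrint, as described, holds identity constant while steering style factors such as acquisition type or sensor.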
Pages: 1028-1041
Page count: 14
Related Papers
50 records in total
  • [1] Conditions of Multimodal Generation as Influenced by Diffusion of Excitation
    Livshits, B. L.
    Stolyarov, S. N.
    Tsikunov, V. N.
    DOKLADY AKADEMII NAUK SSSR, 1966, 168 (01): 72+
  • [2] LayoutDM: Discrete Diffusion Model for Controllable Layout Generation
    Inoue, Naoto
    Kikuchi, Kotaro
    Simo-Serra, Edgar
    Otani, Mayu
    Yamaguchi, Kota
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10167 - 10176
  • [3] A Survey of Multimodal Controllable Diffusion Models
    Jiang, Rui
    Zheng, Guang-Cong
    Li, Teng
    Yang, Tian-Rui
    Wang, Jing-Dong
    Li, Xi
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2024, 39 (03) : 509 - 541
  • [4] LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation
    Zheng, Guangcong
    Zhou, Xianpan
    Li, Xuewei
    Qi, Zhongang
    Shan, Ying
    Li, Xi
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22490 - 22499
  • [5] UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
    Qin, Can
    Zhang, Shu
    Yu, Ning
    Feng, Yihao
    Yang, Xinyi
    Zhou, Yingbo
    Wang, Huan
    Niebles, Juan Carlos
    Xiong, Caiming
    Savarese, Silvio
    Ermon, Stefano
    Fu, Yun
    Xu, Ran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] MoonShot: Towards Controllable Video Generation and Editing with Motion-Aware Multimodal Conditions
    Zhang, David Junhao
    Li, Dongxu
    Le, Hung
    Shou, Mike Zheng
    Xiong, Caiming
    Sahoo, Doyen
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025,
  • [7] HeartBeat: Towards Controllable Echocardiography Video Synthesis with Multimodal Conditions-Guided Diffusion Models
    Zhou, Xinrui
    Huang, Yuhao
    Xue, Wufeng
    Dou, Haoran
    Cheng, Jun
    Zhou, Han
    Ni, Dong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT VII, 2024, 15007 : 361 - 371
  • [8] Relation-Aware Diffusion Model for Controllable Poster Layout Generation
    Li, Fengheng
    Liu, An
    Feng, Wei
    Zhu, Honghe
    Li, Yaoyu
    Zhang, Zheng
    Lv, Jingjing
    Zhu, Xin
    Shen, Junjie
    Lin, Zhangang
    Shao, Jingping
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1249 - 1258
  • [9] Scenario Diffusion: Controllable Driving Scenario Generation With Diffusion
    Pronovost, Ethan
    Ganesina, Meghana Reddy
    Hendy, Noureldin
    Wang, Zeyu
    Morales, Andres
    Wang, Kai
    Roy, Nicholas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [10] CCLAP: Controllable Chinese Landscape Painting Generation via Latent Diffusion Model
    Wang, Zhongqi
    Zhang, Jie
    Ji, Zhilong
    Bai, Jinfeng
    Shan, Shiguang
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2117 - 2122