GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures

Cited by: 0
Authors
Gruber, A. [1 ,2 ]
Collins, E. [2 ]
Meka, A. [2 ]
Mueller, F. [2 ]
Sarkar, K. [2 ]
Orts-Escolano, S. [2 ]
Prasso, L. [2 ]
Busch, J. [2 ]
Gross, M. [1 ]
Beeler, T. [2 ]
Affiliations
[1] ETH Zurich, Switzerland
[2] Google, Menlo Park, CA, USA
Keywords
CCS Concepts: • Computing methodologies → Machine learning; Texturing
DOI
10.1111/cgf.15039
CLC Number
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
High-resolution texture maps are essential for rendering photoreal digital humans for visual effects or for generating data for machine learning. The acquisition of high-resolution assets at scale is cumbersome: it involves enrolling a large number of human subjects, using expensive multi-view camera setups, and significant manual artistic effort to align the textures. To alleviate these problems, we introduce GANtlitz (a play on the German noun Antlitz, meaning face), a generative model that can synthesize multi-modal ultra-high-resolution face appearance maps for novel identities. Our method addresses three distinct challenges: 1) the unavailability of the very large data corpus generally required for training generative models, 2) the memory and computational limitations of training a GAN at ultra-high resolutions, and 3) the consistency of appearance features such as skin color, pores, and wrinkles across different modalities in high-resolution textures. We introduce dual-style blocks, an extension of the style blocks of the StyleGAN2 architecture, which improve multi-modal synthesis. Our patch-based architecture is trained only on image patches obtained from a small set of face textures (<100) and yet allows us to generate seamless appearance maps of novel identities at 6k × 4k resolution. Extensive qualitative and quantitative evaluations and baseline comparisons demonstrate the efficacy of our proposed system.
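The abstract does not detail how the dual-style blocks combine styles, so the sketch below is only a minimal, hypothetical illustration: it assumes each modulated convolution receives a shared identity style (to keep features such as skin tone, pores, and wrinkles consistent across modalities) and a per-modality style (to adapt those features to each output map), combined via StyleGAN2-style weight modulation. The class name DualStyleConv, the two affine layers, and the multiplicative style combination are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStyleConv(nn.Module):
    """Hypothetical dual-style modulated convolution in the spirit of StyleGAN2.

    Assumption: a shared identity style ties appearance features together
    across modalities, while a per-modality style (e.g. albedo vs. normal map)
    adapts the same features to each output texture. Illustrative sketch only.
    """

    def __init__(self, in_ch, out_ch, style_dim, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # Affine maps from the two latent codes to per-channel scales.
        self.affine_id = nn.Linear(style_dim, in_ch)   # shared identity style
        self.affine_mod = nn.Linear(style_dim, in_ch)  # per-modality style
        self.padding = kernel_size // 2

    def forward(self, x, w_identity, w_modality):
        b, in_ch, h, wd = x.shape
        # Combine the two styles into one modulation vector per sample.
        s = self.affine_id(w_identity) * self.affine_mod(w_modality)   # (b, in_ch)
        # StyleGAN2 weight modulation followed by demodulation.
        weight = self.weight[None] * s[:, None, :, None, None]         # (b, out, in, k, k)
        demod = torch.rsqrt(weight.pow(2).sum(dim=[2, 3, 4]) + 1e-8)   # (b, out)
        weight = weight * demod[:, :, None, None, None]
        # Grouped-convolution trick: fold the batch dimension into groups.
        x = x.reshape(1, b * in_ch, h, wd)
        weight = weight.reshape(b * weight.shape[1], in_ch, *weight.shape[3:])
        out = F.conv2d(x, weight, padding=self.padding, groups=b)
        return out.reshape(b, -1, h, wd)
```

Under this assumed design, sharing w_identity across all modality branches while varying w_modality is what would keep pore layout and skin tone aligned between, say, the generated albedo and normal maps, which is the cross-modality consistency goal the abstract describes.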
Pages: 14