Depth-Aware Generative Adversarial Network for Talking Head Video Generation

Cited by: 64
Authors:
Hong, Fa-Ting [1]
Zhang, Longhao [2]
Shen, Li [2]
Xu, Dan [1]
Affiliations:
[1] HKUST, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Alibaba Cloud, Hangzhou, Peoples R China
DOI: 10.1109/CVPR52688.2022.00339
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Talking head video generation aims to produce a synthetic human face video that contains the identity and pose information respectively from a given source image and a driving video. Existing works for this task heavily rely on 2D representations (e.g. appearance and motion) learned from the input images. However, dense 3D facial geometry (e.g. pixel-wise depth) is extremely important for this task, as it helps generate accurate 3D face structures and distinguish noisy information from the possibly cluttered background. Nevertheless, dense 3D geometry annotations are prohibitively costly for videos and are typically not available for this video generation task. In this paper, we introduce a self-supervised face-depth learning method to automatically recover dense 3D facial geometry (i.e. depth) from face videos without requiring any expensive 3D annotation data. Based on the learned dense depth maps, we further propose to leverage them to estimate sparse facial keypoints that capture the critical movement of the human head. In a denser manner, the depth is also utilized to learn 3D-aware cross-modal (i.e. appearance and depth) attention to guide the generation of motion fields for warping source image representations. All these contributions compose a novel depth-aware generative adversarial network (DaGAN) for talking head generation. Extensive experiments demonstrate that our proposed method can generate highly realistic faces and achieve significant results on unseen human faces.
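The cross-modal (appearance and depth) attention mechanism described in the abstract can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the function name, the flattened `(N, C)` feature shapes, and the choice of depth features as queries against appearance keys/values are not taken from the paper's actual implementation.

```python
import numpy as np

def cross_modal_attention(appearance, depth):
    """Toy single-head cross-modal attention (illustrative, not DaGAN's code).

    Depth features act as queries; appearance features act as keys and
    values, so output features are appearance values re-weighted by
    depth-guided affinities.

    appearance, depth: (N, C) arrays of N spatial locations, C channels.
    """
    n, c = appearance.shape
    q, k, v = depth, appearance, appearance
    scores = q @ k.T / np.sqrt(c)                 # (N, N) affinity matrix
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ v                               # depth-guided features

rng = np.random.default_rng(0)
app = rng.standard_normal((16, 8))   # 16 locations, 8 channels
dep = rng.standard_normal((16, 8))
out = cross_modal_attention(app, dep)
print(out.shape)
```

In a real model the features would be multi-channel CNN maps and the attention output would feed a motion-field estimator; this sketch only shows the attention arithmetic itself.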
Pages: 3387-3396 (10 pages)
Related Papers (50 total; items [21]-[30] shown)
  • [21] Dynamic Depth-Aware Network for Endoscopy Super-Resolution. Chen, Wenting; Liu, Yifan; Hu, Jiancong; Yuan, Yixuan. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26(10): 5189-5200
  • [22] DEPTH-AWARE 3D VIDEO FILTERING TARGETTING MULTIVIEW VIDEO PLUS DEPTH COMPRESSION. Aflaki, Payman; Hannuksela, Miska M.; Homayouni, Maryam; Gabbouj, Moncef. 2014 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON), 2014
  • [23] PolyphonicFormer: Unified Query Learning for Depth-Aware Video Panoptic Segmentation. Yuan, Haobo; Li, Xiangtai; Yang, Yibo; Cheng, Guangliang; Zhang, Jing; Tong, Yunhai; Zhang, Lefei; Tao, Dacheng. COMPUTER VISION - ECCV 2022, PT XXVII, 2022, 13687: 582-599
  • [24] Video spatio-temporal generative adversarial network for local action generation. Liu, Xuejun; Guo, Jiacheng; Cui, Zhongji; Liu, Ling; Yan, Yong; Sha, Yun. JOURNAL OF ELECTRONIC IMAGING, 2023, 32(05)
  • [25] Speech Generation by Generative Adversarial Network. Chen, Yijia. 2021 2ND INTERNATIONAL CONFERENCE ON BIG DATA & ARTIFICIAL INTELLIGENCE & SOFTWARE ENGINEERING (ICBASE 2021), 2021: 435-438
  • [26] Video deblurring using the generative adversarial network. Shen, H.; Bian, Q.; Chen, X.; Wang, Z.; Tian, X. Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2019, 46(06): 112-117
  • [27] Stable Video Style Transfer Based on Partial Convolution with Depth-Aware Supervision. Liu, Songhua; Wu, Hao; Luo, Shoutong; Sun, Zhengxing. MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020: 2445-2453
  • [28] Attributes Aware Face Generation with Generative Adversarial Networks. Yuan, Zheng; Zhang, Jie; Shan, Shiguang; Chen, Xilin. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 1657-1664
  • [29] Uni-DVPS: Unified Model for Depth-Aware Video Panoptic Segmentation. Ji-Yeon, Kim; Hyun-Bin, Oh; Byung-Ki, Kwon; Kim, Dahun; Kwon, Yongjin; Oh, Tae-Hyun. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9(07): 6186-6193
  • [30] DDNet: Density and depth-aware network for object detection in foggy scenes. Xiao, Boyi; Xie, Jin; Nie, Jing. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023