Facial Animation Method Based on Deep Learning and Expression AU Parameters

Cited by: 0
Authors
Yan Y. [1 ]
Lyu K. [1 ]
Xue J. [1 ]
Wang C. [1 ]
Gan W. [1 ]
Affiliations
[1] School of Engineering Science, University of Chinese Academy of Sciences, Beijing
Keywords
Blendshape model; Deep learning; Facial action units; Facial animation;
DOI
10.3724/SP.J.1089.2019.17682
Abstract
To generate virtual characters with realistic expressions more conveniently on a computer, a facial animation method based on deep learning and expression AU parameters is proposed. The method defines 24 facial action unit parameters, i.e. expression AU parameters, to describe facial expressions, and then constructs and trains a corresponding parameter regression network using a convolutional neural network and the FEAFA dataset. When generating facial animation from video, image sequences are first captured with an ordinary monocular camera, and faces are detected in each frame using the supervised descent method. The expression AU parameters, regarded as expression blendshape coefficients, are then regressed accurately from the detected face images and combined with the avatar's neutral-expression blendshape and the 24 corresponding blendshapes to animate the digital avatar with a blendshape model under real-world conditions. The method does not require the 3D reconstruction step used in traditional approaches, and by taking the relationships between different action units into consideration it produces more natural and realistic animation. Furthermore, regressing the expression coefficients directly from face images rather than from facial landmarks makes them more accurate. © 2019, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
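The blendshape synthesis step described in the abstract follows the standard delta-blendshape formulation. The following is a minimal numpy sketch of that step only, assuming a neutral mesh, 24 delta blendshapes, and per-AU coefficients in [0, 1] already regressed by the network; the CNN regressor, the FEAFA training data, and the face detector are not reproduced here, and all names and shapes are illustrative rather than taken from the paper.

import numpy as np

def synthesize_expression(neutral, blendshapes, au_coeffs):
    """Blend a neutral face mesh with 24 expression blendshapes.

    neutral     : (V, 3) array, vertex positions of the neutral expression B0
    blendshapes : (24, V, 3) array, vertex positions of the 24 AU blendshapes B_i
    au_coeffs   : (24,) array, expression AU parameters in [0, 1], here assumed
                  to be the regressor's output for one face image

    Returns the animated mesh  B0 + sum_i alpha_i * (B_i - B0).
    """
    deltas = blendshapes - neutral[None, :, :]      # per-AU offsets from the neutral mesh
    coeffs = np.clip(au_coeffs, 0.0, 1.0)           # keep coefficients in the valid range
    return neutral + np.tensordot(coeffs, deltas, axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = 5000                                        # illustrative vertex count
    neutral = rng.standard_normal((V, 3))
    blendshapes = neutral + 0.05 * rng.standard_normal((24, V, 3))
    au_coeffs = rng.uniform(0.0, 1.0, size=24)      # stand-in for the CNN's output
    mesh = synthesize_expression(neutral, blendshapes, au_coeffs)
    print(mesh.shape)                               # (5000, 3)

In this sketch each frame of the input video would yield one au_coeffs vector, so driving the avatar amounts to re-evaluating synthesize_expression per frame with the same neutral mesh and blendshape set.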
Pages: 1973-1980
Number of pages: 7