Facial Expression Editing Technology with Fused Feature Coding

Cited by: 0
Authors
Liu Y. [1]
Jin J. [1]
Chen L. [1]
Zhang J. [1]
Affiliations
[1] School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang
Keywords
Continuous facial expression generation; Deconvolution; GANimation improvement; Multi-scale feature fusion
DOI: 10.12178/1001-0548.2020373
Abstract
To address two weaknesses of current continuous facial expression generation models, namely a tendency to produce artifacts in expression-dense regions and weak control over expressions, the GANimation model is improved to increase the accuracy of control over the facial action units (AUs) of the expression muscles. A multi-dimension feature fusion (MFF) module is introduced between the encoding and decoding feature layers of the generator, and the resulting fused features are passed to the image decoder through long skip connections. A deconvolution layer is added to the decoding part of the generator so that the MFF module can be integrated more efficiently and reasonably. In comparative experiments against the original network on a self-made dataset, the expression-synthesis accuracy and the generated-image quality of the improved model increased by 1.28 and 2.52 respectively, verifying that the improved algorithm performs better in facial expression editing, producing images with less blurring and fewer artifacts. © 2021, Editorial Board of Journal of the University of Electronic Science and Technology of China. All rights reserved.
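The abstract describes fusing encoder features from several scales and feeding them to the decoder through long skip connections. The paper's exact MFF design is not given here, so the following is only a minimal illustrative sketch of that general idea, assuming PyTorch and hypothetical layer sizes: multi-scale encoder features are resized to a common resolution, concatenated, and fused by a 1×1 convolution before being passed along a skip connection.

```python
# Hypothetical sketch of a multi-dimension feature fusion (MFF) block.
# The channel counts and fusion strategy are illustrative assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFF(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # A 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats):
        # Upsample every feature map to the spatial size of the largest one
        h, w = feats[0].shape[-2:]
        resized = [F.interpolate(f, size=(h, w), mode="bilinear",
                                 align_corners=False) for f in feats]
        # Concatenate along the channel dimension, then fuse
        return self.fuse(torch.cat(resized, dim=1))

# Example: three encoder feature maps at decreasing resolutions
f1 = torch.randn(1, 64, 32, 32)
f2 = torch.randn(1, 128, 16, 16)
f3 = torch.randn(1, 256, 8, 8)
fused = MFF([64, 128, 256], 64)([f1, f2, f3])
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

The fused tensor would then be concatenated with (or added to) the corresponding decoder feature map via the long skip connection, which is what lets the extra deconvolution layer in the decoder consume it at a matching resolution.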
Pages: 741–748
Page count: 7