Audio-Driven Laughter Behavior Controller

Cited by: 6
Authors
Ding, Yu [1 ]
Huang, Jing [2 ]
Pelachaud, Catherine [3 ]
Affiliations
[1] Univ Houston, Dept Comp Sci, Houston, TX 77204 USA
[2] Zhejiang Gongshang Univ, Sch Informat & Elect Engn, Hangzhou 310018, Zhejiang, Peoples R China
[3] Univ Paris 06, CNRS, ISIR, F-75005 Paris, France
Keywords
Laughter; audio-driven; data-driven; animation synthesis; continuous-state; Kalman filter; prosody; nonverbal behaviors; virtual character; statistical framework;
DOI
10.1109/TAFFC.2017.2754365
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It is well documented that laughter is an important communicative and expressive signal in face-to-face conversations. Our work aims at building a laughter behavior controller for a virtual character that generates upper-body animations from laughter audio given as input. This controller relies on the tight correlations between laughter audio and body behaviors. A unified continuous-state statistical framework, inspired by the Kalman filter, is proposed to learn the correlations between laughter audio and head/torso behavior from a recorded dataset of human laughter. Because the recorded dataset lacks shoulder behavior data, a rule-based method is defined to model the correlation between laughter audio and shoulder behavior. In the synthesis step, these characterized correlations are rendered in the animation of a virtual character. To validate our controller, a subjective evaluation was conducted in which participants viewed videos of a laughing virtual character, comparing animations produced with our controller against those of a state-of-the-art method. The evaluation results show that the laughter animations computed with our controller are perceived as more natural, expressing amusement more freely and appearing more authentic than those of the state-of-the-art method.
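The abstract describes a continuous-state framework, inspired by the Kalman filter, that maps laughter audio to head/torso motion. The sketch below is purely illustrative and is not the paper's implementation: it shows the general shape of such a model as a linear dynamical system whose latent state is driven by per-frame audio features. All dimensions, matrices, and the `synthesize_motion` function are hypothetical; in the paper the coupling parameters would be learned from the recorded laughter dataset.

```python
import numpy as np

def synthesize_motion(audio_feats, A, B, C, x0):
    """Illustrative Kalman-style synthesis: propagate a latent state
    x_t = A @ x_{t-1} + B @ u_t driven by audio features u_t, and
    emit a motion frame y_t = C @ x_t for each audio frame."""
    x = x0
    motion = []
    for u in audio_feats:
        x = A @ x + B @ u          # latent state driven by audio features
        motion.append(C @ x)       # read out a head/torso pose parameter
    return np.array(motion)

# Toy usage: 2-D latent state, 3-D audio features (e.g. pitch, energy,
# voicing), 1-D output (e.g. a head-pitch angle).  Values are random
# stand-ins for learned parameters.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(2)                # smooth temporal decay of the state
B = 0.1 * rng.normal(size=(2, 3))  # audio-to-state coupling (would be learned)
C = np.array([[1.0, 0.0]])         # observe the first state dimension
audio = rng.normal(size=(50, 3))   # 50 frames of stand-in prosody features
traj = synthesize_motion(audio, A, B, C, np.zeros(2))
print(traj.shape)                  # (50, 1): one pose value per audio frame
```

The decaying transition matrix `A` is what gives the output its temporal smoothness, which is the main motivation for a continuous-state model over a frame-independent regression.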
Pages: 546-558 (13 pages)
Related Papers
50 in total
  • [21] Parametric Implicit Face Representation for Audio-Driven Facial Reenactment
    Huang, Ricong
    Lai, Peiwen
    Qin, Yipeng
    Li, Guanbin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 12759 - 12768
  • [22] Audio-driven emotional speech animation for interactive virtual characters
    Charalambous, Constantinos
    Yumak, Zerrin
    van der Stappen, A. Frank
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2019, 30 (3-4)
  • [23] Partial linear regression for audio-driven talking head application
    Hsieh, CK
    Chen, YC
    2005 IEEE International Conference on Multimedia and Expo (ICME), Vols 1 and 2, 2005, : 281 - 284
  • [24] Audio-driven Neural Gesture Reenactment with Video Motion Graphs
    Zhou, Yang
    Yang, Jimei
    Li, Dingzeyu
    Saito, Jun
    Aneja, Deepali
    Kalogerakis, Evangelos
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 3408 - 3418
  • [25] Spatially and Temporally Optimized Audio-Driven Talking Face Generation
    Dong, Biao
    Ma, Bo-Yao
    Zhang, Lei
    COMPUTER GRAPHICS FORUM, 2024, 43 (07)
  • [26] Audio2AB: Audio-driven collaborative generation of virtual character animation
    Niu, Lichao
    Xie, Wenjun
    Wang, Dong
    Cao, Zhongrui
    Liu, Xiaoping
    VIRTUAL REALITY & INTELLIGENT HARDWARE, 2024, 6 (01) : 56 - 70
  • [27] PADVG: A Simple Baseline of Active Protection for Audio-Driven Video Generation
    Liu, Huan
    Liu, Xiaolong
    Tan, Zichang
    Li, Xiaolong
    Zhao, Yao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (06)
  • [28] Audio-Driven Stylized Gesture Generation with Flow-Based Model
    Ye, Sheng
    Wen, Yu-Hui
    Sun, Yanan
    He, Ying
    Zhang, Ziyang
    Wang, Yaoyuan
    He, Weihua
    Liu, Yong-Jin
    COMPUTER VISION - ECCV 2022, PT V, 2022, 13665 : 712 - 728
  • [29] EmoFace: Audio-driven Emotional 3D Face Animation
    Liu, Chang
    Lin, Qunfen
    Zeng, Zijiao
    Pan, Ye
    2024 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES, VR 2024, 2024, : 387 - 397
  • [30] Audio-Driven Talking Face Video Generation With Dynamic Convolution Kernels
    Ye, Zipeng
    Xia, Mengfei
    Yi, Ran
    Zhang, Juyong
    Lai, Yu-Kun
    Huang, Xuwei
    Zhang, Guoxin
    Liu, Yong-Jin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 2033 - 2046