Latent Dynamics for Artefact-Free Character Animation via Data-Driven Reinforcement Learning

Cited by: 0
Authors
Gamage, Vihanga [1 ]
Ennis, Cathy [1 ]
Ross, Robert [1 ]
Affiliations
[1] Technol Univ Dublin, Sch Comp Sci, Dublin, Ireland
Keywords
Reinforcement learning; Latent dynamics; Animation
DOI
10.1007/978-3-030-86380-7_55
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the field of character animation, recent work has shown that data-driven reinforcement learning (RL) methods can address issues such as the difficulty of crafting reward functions, and can train agents that portray generalisable social behaviours. However, particularly when portraying subtle movements, these agents have shown a propensity for noticeable artefacts that may have an adverse perceptual effect. Thus, for these agents to be used effectively in applications where they interact with humans, the likelihood of these artefacts needs to be minimised. In this paper, we present a novel architecture for agents to learn latent dynamics more efficiently, while maintaining modelling flexibility and performance, and to reduce the occurrence of noticeable artefacts when generating animation. Furthermore, we introduce a mean-sampling technique for applying learned latent stochastic dynamics, which improves the stability of trained model-based RL agents.
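Note: the record above contains no code. The following is a minimal, hypothetical PyTorch sketch of the mean-sampling idea mentioned in the abstract, i.e. propagating the mean of a learned Gaussian latent transition instead of drawing a stochastic sample during rollouts. The class and function names (LatentTransition, rollout) and all dimensions are illustrative assumptions, not the authors' implementation.

    # Illustrative only: a toy Gaussian latent transition model and a rollout
    # helper that supports mean-sampling (propagating the predicted mean
    # instead of a stochastic draw), as described in the abstract.
    import torch
    import torch.nn as nn

    class LatentTransition(nn.Module):
        # Hypothetical stand-in for a learned stochastic latent dynamics model.
        def __init__(self, latent_dim=32, action_dim=8, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + action_dim, hidden), nn.ELU(),
                nn.Linear(hidden, 2 * latent_dim),  # predicts mean and log-std
            )

        def forward(self, z, a):
            mean, log_std = self.net(torch.cat([z, a], dim=-1)).chunk(2, dim=-1)
            return mean, log_std.exp()

    def rollout(model, z0, actions, mean_sampling=True):
        # Roll the latent state forward through a sequence of actions.
        z, trajectory = z0, []
        for a in actions:
            mean, std = model(z, a)
            # Mean-sampling: use the distribution mean rather than a random
            # draw, trading sample diversity for lower-variance rollouts.
            z = mean if mean_sampling else mean + std * torch.randn_like(std)
            trajectory.append(z)
        return torch.stack(trajectory)

    if __name__ == "__main__":
        model = LatentTransition()
        z0 = torch.zeros(1, 32)                   # initial latent state
        actions = torch.randn(16, 1, 8)           # 16-step action sequence
        print(rollout(model, z0, actions).shape)  # torch.Size([16, 1, 32])

Propagating the mean removes per-step noise from the latent trajectory, which is one plausible reading of the abstract's claim that mean-sampling reduces noticeable artefacts; setting mean_sampling=False recovers fully stochastic rollouts.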
Pages: 675-687
Page count: 13