Music-Driven Synchronous Dance Generation Considering K-Pop Musical and Choreographical Characteristics

Cited by: 0
Authors
Kim, Seohyun [1 ]
Lee, Kyogu [1 ,2 ]
Affiliations
[1] Seoul Natl Univ, Dept Intelligence & Informat, Music & Audio Res Grp, Seoul 08826, South Korea
[2] Seoul Natl Univ, Interdisciplinary Program Artificial Intelligence, Seoul 08826, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Research Foundation, Singapore
Keywords
Synchronous dance generation; K-pop group dance generation; autoregressive model; multi-step learning;
DOI
10.1109/ACCESS.2024.3420433
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Generating dance movements from music is a highly challenging task, as it requires a model to comprehend concepts from two different modalities: audio and video. Recently, however, deep-learning-based dance generation has been actively studied. Existing studies tend to focus on generating dances in limited genres or for a single dancer, so when K-pop music, which mixes multiple genres, is fed to existing methods, they fail to generate dances spanning various genres or group dances. In this paper, we propose an autoregressive K-pop dance generation model, a system designed to generate two-person synchronous dances from K-pop music. To this end, we created a dataset by collecting videos of multiple dancers simultaneously dancing to K-pop music across various genres. Generating synchronous dances has two meanings: one is to generate a dance that matches both an input music track and a given dance, and the other is to simultaneously generate multiple dances that match the given music. We call these secondary dance generation and group dance generation, respectively, and design the proposed model so that it can perform both. In addition, we propose additional learning methods that help the model generate synchronous dances more effectively. To assess the performance of the proposed model, both qualitative and quantitative evaluations are conducted, demonstrating the effectiveness and suitability of the proposed model for generating synchronous dances to K-pop music.
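As a rough illustration of the two settings described in the abstract, the sketch below rolls out an autoregressive generator frame by frame, conditioned on music features and, in the secondary setting, on a given partner dance. All names (SyncDanceGenerator, generate_secondary), feature sizes, and the GRU backbone are illustrative assumptions for exposition only; they are not the authors' architecture or training scheme.

# Minimal sketch (not the authors' code) of autoregressive synchronous dance
# generation conditioned on music features. Dimensions and modules are assumed.
import torch
import torch.nn as nn

MUSIC_DIM = 35    # assumed per-frame music feature size (e.g., spectral + beat features)
POSE_DIM = 72     # assumed per-frame pose size for one dancer (e.g., 24 joints x 3)

class SyncDanceGenerator(nn.Module):
    """Predicts a dancer's next pose from the current music frame, the dancer's
    previous pose, and (optionally) the partner's pose at the same frame."""

    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        # Input: music frame + own previous pose + partner pose (zeros if unknown)
        self.rnn = nn.GRU(MUSIC_DIM + 2 * POSE_DIM, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, POSE_DIM)

    def step(self, music_t, prev_pose, partner_pose, h=None):
        # One autoregressive step: condition on music, own history, and partner.
        x = torch.cat([music_t, prev_pose, partner_pose], dim=-1).unsqueeze(1)
        out, h = self.rnn(x, h)
        return self.head(out[:, -1]), h

@torch.no_grad()
def generate_secondary(model, music, lead_dance, seed_pose):
    """Secondary dance generation: given music and a lead dance, generate a
    matching follower dance frame by frame."""
    poses, prev, h = [], seed_pose, None
    for t in range(music.shape[1]):
        prev, h = model.step(music[:, t], prev, lead_dance[:, t], h)
        poses.append(prev)
    return torch.stack(poses, dim=1)

if __name__ == "__main__":
    model = SyncDanceGenerator()
    music = torch.randn(1, 120, MUSIC_DIM)   # ~120 frames of music features
    lead = torch.randn(1, 120, POSE_DIM)     # given lead dancer motion
    seed = torch.zeros(1, POSE_DIM)          # neutral starting pose
    follower = generate_secondary(model, music, lead, seed)
    print(follower.shape)                    # (1, 120, POSE_DIM)

For the group dance generation setting, the same step function could be rolled out for both dancers in parallel from the music alone, with each dancer's previously generated pose fed in as the other's partner condition; the paper itself describes the authors' actual conditioning and multi-step learning strategy.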
Pages: 94152-94163
Page count: 12