Anticipatory model of musical style imitation using collaborative and competitive reinforcement learning

Cited: 0
Authors
Cont, Arshia [1 ,2 ]
Dubnov, Shlomo [2 ]
Assayag, Gerard [1 ]
Affiliations
[1] Ircam Ctr Pompidou, UMR CNRS 9912, F-9912 Paris, France
[2] Univ Calif San Diego, Ctr Res Comp, San Diego, CA USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The role of expectation in listening to and composing music has drawn much attention in music cognition for about half a century. In this paper, we present a first attempt to model, within an anticipatory framework, some aspects of musical expectation pertaining specifically to short-term and working memory. In our proposal, anticipation is the mental realization of possible predicted actions and their effect on the perception of the world at an instant in time. We demonstrate the model in applications to automatic improvisation and style imitation. The proposed model, based on cognitive foundations of musical expectation, is an active model using reinforcement learning techniques with multiple agents that learn competitively and in collaboration. We show that, compared to similar models, this anticipatory framework needs little training data and exhibits complex musical behavior, such as long-term planning and formal shapes, as a result of the anticipatory architecture. We provide sample results and discuss further research.
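The abstract describes multiple reinforcement-learning agents that are rewarded both individually (competitively) and jointly (in collaboration). The sketch below is only a generic illustration of that reward scheme on a toy next-symbol prediction task, not the paper's actual architecture; all names (`Agent`, `train`) and the specific reward values are assumptions made for this example.

```python
import random


class Agent:
    """Minimal tabular Q-learning agent that predicts the next symbol."""

    def __init__(self, alpha=0.5, epsilon=0.1):
        self.q = {}             # (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration rate

    def act(self, state, actions):
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward):
        key = (state, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)


def train(sequence, n_agents=3, epochs=200, seed=0):
    """Each agent guesses the next symbol of the sequence.  A correct guess
    earns an individual (competitive) reward; if any agent is correct, every
    agent also receives a smaller shared (collaborative) reward."""
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    alphabet = sorted(set(sequence))
    for _ in range(epochs):
        for i in range(len(sequence) - 1):
            state, target = sequence[i], sequence[i + 1]
            guesses = [ag.act(state, alphabet) for ag in agents]
            shared = 0.5 if target in guesses else 0.0
            for ag, g in zip(agents, guesses):
                ag.update(state, g, shared + (1.0 if g == target else 0.0))
    return agents, alphabet


agents, alphabet = train("abcabcabc")
# Pooled greedy prediction for the symbol that follows 'a'
prediction = max(alphabet,
                 key=lambda s: sum(ag.q.get(("a", s), 0.0) for ag in agents))
```

The shared reward encourages agents to cover the target jointly, while the individual bonus keeps them competing to be the one that predicts correctly; the paper's model builds a far richer anticipatory structure on top of this basic idea.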
Pages: 285 / +
Number of pages: 3
Related papers
50 records in total
  • [41] Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control
    Jha, Devesh K.
    Jain, Siddarth
    Romeres, Diego
    Yerazunis, William
    Nikovski, Daniel
    2023 EUROPEAN CONTROL CONFERENCE, ECC, 2023,
  • [42] Motor learning model using reinforcement learning with neural internal model
    Izawa, J
    Kondo, T
    Ito, K
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 3146 - 3151
  • [43] Relaxation "sweet spot" exploration in pantophonic musical soundscape using reinforcement learning
    Jayarathne, Isuru
    Cohen, Michael
    Frishkopf, Michael
    Mulyk, Gregory
    PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES: COMPANION (IUI 2019), 2019, : 55 - 56
  • [44] Collaborative Video Caching in the Edge Network using Deep Reinforcement Learning
    Lekharu, Anirban
    Gupta, Pranav
    Sur, Arijit
    Patra, Moumita
    ACM TRANSACTIONS ON INTERNET OF THINGS, 2024, 5 (03):
  • [45] Collaborative Partially-Observable Reinforcement Learning Using Wireless Communications
    Ko, Eisaku
    Chen, Kwang-Cheng
    Lien, Shao-Yu
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [46] Using feedback in collaborative reinforcement learning to adaptively optimize MANET routing
    Dowling, J
    Curran, E
    Cunningham, R
    Cahill, V
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, 2005, 35 (03): : 360 - 372
  • [47] SHARING REINFORCEMENT CONTINGENCIES WITH A MODEL - SOCIAL-LEARNING ANALYSIS OF SIMILARITY EFFECTS IN IMITATION RESEARCH
    BUSSEY, K
    PERRY, DG
    JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, 1976, 34 (06) : 1168 - 1176
  • [48] The Learning Material Classified Model Using VARK Learning Style
    Daoruang, Beesuda
    Mingkhwan, Anirach
    Sanrach, Charun
    IMPACT OF THE 4TH INDUSTRIAL REVOLUTION ON ENGINEERING EDUCATION, ICL2019, VOL 2, 2020, 1135 : 505 - 513
  • [49] Optimizing HP Model Using Reinforcement Learning
    Yang, Ru
    Wu, Hongjie
    Fu, Qiming
    Ding, Tao
    Chen, Cheng
    INTELLIGENT COMPUTING THEORIES AND APPLICATION, PT II, 2018, 10955 : 383 - 388
  • [50] Construction of conscious model using reinforcement learning
    Kozuma, M
    Taki, H
    Matsuda, N
    Miura, H
    Hori, S
    Abe, N
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2004, 3214 : 175 - 180