Balancing bias and performance in polyphonic piano transcription systems

Cited by: 0
Authors
Martak, Lukas Samuel [1 ,2 ]
Kelz, Rainer [1 ]
Widmer, Gerhard [1 ,2 ]
Affiliations
[1] Johannes Kepler Univ Linz, Inst Computat Percept, Linz, Austria
[2] Johannes Kepler Univ Linz, Linz Inst Technol, Artificial Intelligence Lab, Linz, Austria
Source
Frontiers in Signal Processing
Funding
European Research Council
Keywords
differentiable dictionary search; non-negative matrix factorization; deep learning; normalizing flows; density models; piano music; source separation; automatic music transcription;
DOI
10.3389/frsip.2022.975932
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
Current state-of-the-art methods for polyphonic piano transcription tend to use high-capacity neural networks. Most models are trained "end-to-end" and learn a mapping from audio input to pitch labels. They require large training corpora consisting of many audio recordings of different piano models together with temporally aligned pitch labels. Previous work has shown that neural network-based systems struggle to generalize to unseen note combinations, as they tend to learn note combinations by heart. Semi-supervised linear matrix decomposition is a frequently used alternative approach to piano transcription, one that does not suffer from this particular drawback. The disadvantages of linear methods start to show when they encounter recordings of pieces played on unseen pianos, a scenario where neural networks seem relatively untroubled. A recently proposed approach called "Differentiable Dictionary Search" (DDS) combines the modeling capacity of deep density models with the linear mixing model of matrix decomposition in order to balance the complementary advantages and disadvantages of the two standalone approaches. This makes it better suited to modeling unseen sources, while generalization to unseen note combinations should be unaffected, because the mixing model is not learned and thus cannot acquire a corpus bias. In its initially proposed form, however, DDS is too inefficient in its use of computational resources to be applied to piano music transcription. To reduce computational demands and memory requirements, we propose a number of modifications. These adjustments enable a fair comparison of our modified DDS variant with a semi-supervised matrix decomposition baseline, as well as with a state-of-the-art deep neural network-based system trained end-to-end. In systematic experiments with both musical and "unmusical" piano recordings (real musical pieces and unusual chords), we provide quantitative and qualitative analyses at the frame level, characterizing the behavior of the modified approach and comparing it to several related methods. The results demonstrate the fundamental promise of the model and, in particular, show improvements in situations where a corpus bias, incurred by learning from musical material of a specific genre, would be problematic.
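To make the described mixing model concrete, the following is a minimal, hypothetical sketch of a DDS-style decomposition. Diagonal Gaussians stand in for the normalizing-flow density models of the actual system, and all names, shapes, and hyperparameters are illustrative assumptions rather than the authors' implementation: dictionary atoms are kept close to per-note density models, non-negative activations are found by gradient descent, and the linear mixing model itself is never learned from a corpus.

```python
# Minimal DDS-style sketch (assumptions: diagonal Gaussians replace the
# normalizing flows of the paper; all names and sizes are illustrative).
import torch

torch.manual_seed(0)
n_bins, n_frames, n_notes = 64, 50, 4

# Hypothetical spectral templates, one per note (stand-ins for isolated-note spectra).
true_templates = torch.rand(n_bins, n_notes)

# Per-note density models: diagonal Gaussians centered on each template.
means, log_stds = true_templates.clone(), torch.full((n_bins, n_notes), -3.0)

def atoms_nll(atoms):
    # Negative log-likelihood of the dictionary atoms under the per-note
    # density models (constants dropped); keeps atoms near plausible note spectra.
    return 0.5 * (((atoms - means) / log_stds.exp()) ** 2).sum()

# Synthetic "recording": a linear mixture of the true templates with sparse activations.
true_acts = torch.rand(n_notes, n_frames) * (torch.rand(n_notes, n_frames) > 0.7).float()
X = true_templates @ true_acts

# Free parameters: dictionary atoms (searched under the density prior) and
# activations (softplus keeps them non-negative, as in NMF).
atoms = torch.nn.Parameter(true_templates + 0.1 * torch.randn_like(true_templates))
act_raw = torch.nn.Parameter(torch.zeros(n_notes, n_frames))
opt = torch.optim.Adam([atoms, act_raw], lr=0.05)

for step in range(500):
    acts = torch.nn.functional.softplus(act_raw)   # non-negative activations
    recon = atoms @ acts                            # fixed linear mixing model
    loss = ((recon - X) ** 2).mean() + 1e-4 * atoms_nll(atoms)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Frame-level "transcription": threshold the learned activations per note and frame.
estimate = torch.nn.functional.softplus(act_raw).detach() > 0.1
```

Because only the per-recording activations and the density-constrained atoms are optimized, nothing in such a decomposition can memorize note combinations from a training corpus, which is the property the abstract attributes to keeping the mixing model unlearned.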
Pages: 17