Underdetermined blind source separation using sparse representations

Cited by: 698
Authors
Bofill, P
Zibulevsky, M
Institutions
[1] Univ Politecn Catalunya, Dept Arquitectura Computadors, ES-08034 Barcelona, Spain
[2] Univ New Mexico, Dept Comp Sci, Albuquerque, NM 87131 USA
Keywords
blind source separation; underdetermined source separation; sparse signal representation; potential-function clustering; l1-norm decomposition;
DOI
10.1016/S0165-1684(01)00120-7
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
The scope of this work is the separation of N sources from M linear mixtures when the underlying system is underdetermined, that is, when M < N. If the input distribution is sparse, the mixing matrix can be estimated either by external optimization or by clustering, and, given the mixing matrix, a minimal l1-norm representation of the sources can be obtained by solving a low-dimensional linear programming problem for each of the data points. When the signals themselves do not satisfy this sparsity assumption, it can still be achieved by performing the separation in a sparser transformed domain. The approach is illustrated here for M = 2. In this case we estimate both the number of sources and the mixing matrix from the maxima of a potential function along the unit circle, and we obtain the minimal l1-norm representation of each data point as a linear combination of the pair of basis vectors that enclose it. Several experiments with music and speech signals show that their time-domain representation is not sparse enough; excellent results were obtained, however, using their short-time Fourier transform, including the separation of up to six sources from two mixtures. (C) 2001 Elsevier Science B.V. All rights reserved.
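To make the two-step procedure above concrete, the following is a minimal NumPy sketch of the M = 2 case: the mixing directions are taken as the significant maxima of a smoothed, amplitude-weighted angular histogram over the half unit circle, and each data point is then decomposed against the pair of estimated basis vectors that enclose it, which is the closed-form minimal l1-norm solution in two dimensions. This is an illustration of the idea, not the authors' implementation; the Gaussian kernel and the parameters sigma, n_angles, and rel_peak are assumed heuristics standing in for the paper's potential-function details.

```python
# Minimal sketch of the M = 2 case described in the abstract (assumptions:
# Gaussian smoothing kernel, illustrative values of sigma / n_angles /
# rel_peak; at least two source directions are present in the data).
import numpy as np


def estimate_directions(X, n_angles=720, sigma=0.05, rel_peak=0.1):
    """Estimate mixing-column angles from a 2 x T array of mixture data.

    Builds an amplitude-weighted, kernel-smoothed angular histogram (a
    "potential function") over the half unit circle [0, pi) and returns
    its significant local maxima, one per estimated source direction.
    """
    theta = np.arctan2(X[1], X[0]) % np.pi        # fold directions to [0, pi)
    weights = np.linalg.norm(X, axis=0)           # low-energy points count less
    grid = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    d = np.abs(grid[:, None] - theta[None, :])    # angular distance to each point
    d = np.minimum(d, np.pi - d)                  # wrap on the half circle
    potential = (weights * np.exp(-(d / sigma) ** 2)).sum(axis=1)
    left, right = np.roll(potential, 1), np.roll(potential, -1)
    peaks = ((potential > left) & (potential > right)
             & (potential > rel_peak * potential.max()))
    return np.sort(grid[peaks])


def min_l1_separate(X, angles):
    """Minimal l1-norm recovery: each data point is written as a linear
    combination of the two estimated basis vectors that enclose it; all
    other sources are set to zero at that point."""
    A = np.vstack([np.cos(angles), np.sin(angles)])  # 2 x N estimated mixing matrix
    N, T = len(angles), X.shape[1]
    S = np.zeros((N, T))
    theta = np.arctan2(X[1], X[0]) % np.pi
    for t in range(T):
        i = np.searchsorted(angles, theta[t]) - 1
        pair = [i, i + 1] if 0 <= i < N - 1 else [N - 1, 0]  # wrap past last column
        S[pair, t] = np.linalg.solve(A[:, pair], X[:, t])    # exact 2 x 2 solve
    return S


# Toy demo: N = 3 sparse sources, M = 2 mixtures. With real recordings the
# columns of X would instead hold short-time Fourier coefficients of the two
# mixtures, since (per the abstract) time-domain audio is rarely sparse enough.
rng = np.random.default_rng(0)
S_true = rng.laplace(size=(3, 5000)) * (rng.random((3, 5000)) < 0.1)
mix_angles = np.array([0.3, 1.2, 2.4])
A_true = np.vstack([np.cos(mix_angles), np.sin(mix_angles)])
X = A_true @ S_true
angles = estimate_directions(X)   # should find ~3 peaks near mix_angles
S_hat = min_l1_separate(X, angles)
```

No linear program is needed here because, for M = 2, the shortest l1 path that reproduces a data point runs along the two basis directions adjacent to the point's own direction; the general M < N case would instead solve the small linear program mentioned in the abstract.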
Pages: 2353-2362
Page count: 10
Related Papers
50 records in total
  • [31] Underdetermined Blind Audio Source Separation Using Modal Decomposition
    Aïssa-El-Bey, Abdeldjalil
    Abed-Meraim, Karim
    Grenier, Yves
    EURASIP Journal on Audio, Speech, and Music Processing, 2007
  • [32] Underdetermined Blind Source Separation in Echoic Environments Using DESPRIT
    Melia, Thomas
    Rickard, Scott
    EURASIP Journal on Advances in Signal Processing, 2007
  • [33] An approach employing signal sparse representation in wavelet domain for underdetermined blind source separation
    Pomponi, E
    Squartini, S
    Piazza, F
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004 : 2099 - 2104
  • [35] Optimal sparse representations for blind source separation and blind deconvolution: A learning approach
    Bronstein, MM
    Bronstein, AM
    Zibulevsky, M
    Zeevi, YY
    ICIP: 2004 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2004 : 1815 - 1818
  • [36] Underdetermined Reverberant Blind Source Separation: Sparse Approaches for Multiplicative and Convolutive Narrowband Approximation
    Feng, Fangchen
    Kowalski, Mathieu
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2019, 27 (02) : 442 - 456
  • [37] Underdetermined blind source separation of speech mixtures unifying dictionary learning and sparse representation
    Xie, Yuan
    Xie, Kan
    Xie, Shengli
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (12) : 3573 - 3583
  • [38] Active source selection using gap statistics for underdetermined blind source separation
    Luo, Y
    Chambers, J
    SEVENTH INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND ITS APPLICATIONS, VOL 1, PROCEEDINGS, 2003 : 137 - 140
  • [39] Improved DUET for Underdetermined Blind Source Separation
    Gao, Feng
    Sun, Gongxian
    Xiao, Ming
    INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2011), 2011, 8285
  • [40] Underdetermined Blind Source Separation of Bioacoustic Signals
    Hassan, Norsalina
    Ramli, Dzati Athiar
    PERTANIKA JOURNAL OF SCIENCE AND TECHNOLOGY, 2023, 31 (05) : 2257 - 2272