A framework for evaluating the performance of SMLM cluster analysis algorithms

Cited by: 0
Authors
Daniel J. Nieves
Jeremy A. Pike
Florian Levet
David J. Williamson
Mohammed Baragilly
Sandra Oloketuyi
Ario de Marco
Juliette Griffié
Daniel Sage
Edward A. K. Cohen
Jean-Baptiste Sibarita
Mike Heilemann
Dylan M. Owen
Affiliations
[1] University of Birmingham, Institute of Immunology and Immunotherapy, College of Medical and Dental Sciences
[2] University of Birmingham, Centre of Membrane Proteins and Receptors (COMPARE)
[3] University of Birmingham, Institute of Cardiovascular Sciences, College of Medical and Dental Sciences
[4] Université de Bordeaux, Interdisciplinary Institute for Neuroscience, CNRS, IINS, UMR 5297
[5] Université de Bordeaux, Bordeaux Imaging Center, CNRS, INSERM, BIC, UMS 3420, US 4
[6] King’s College London, Department of Infectious Diseases, School of Immunology and Microbial Sciences
[7] Helwan University, Department of Mathematics, Insurance and Applied Statistics
[8] University of Nova Gorica, Laboratory of Environmental and Life Sciences
[9] Ecole Polytechnique Fédérale de Lausanne (EPFL), Laboratory of Experimental Biophysics, Institute of Physics
[10] Ecole Polytechnique Fédérale de Lausanne (EPFL), Biomedical Imaging Group
[11] Imperial College London, Department of Mathematics
[12] Goethe-University Frankfurt, Institute of Physical and Theoretical Chemistry
[13] University of Birmingham, School of Mathematics
Source
Nature Methods | 2023 / Vol. 20
Keywords
DOI
Not available
CLC number
Subject classification code
Abstract
Single-molecule localization microscopy (SMLM) generates data in the form of coordinates of localized fluorophores. Cluster analysis is an attractive route for extracting biologically meaningful information from such data and has been widely applied. Despite a range of cluster analysis algorithms, there exists no consensus framework for the evaluation of their performance. Here, we use a systematic approach based on two metrics to score the success of clustering algorithms in simulated conditions mimicking experimental data. We demonstrate the framework using seven diverse analysis algorithms: DBSCAN, ToMATo, KDE, FOCAL, CAML, ClusterViSu and SR-Tesseler. Given that the best performer depended on the underlying distribution of localizations, we demonstrate an analysis pipeline based on statistical similarity measures that enables the selection of the most appropriate algorithm, and the optimized analysis parameters for real SMLM data. We propose that these standard simulated conditions, metrics and analysis pipeline become the basis for future analysis algorithm development and evaluation.
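As an illustration of the kind of coordinate-based cluster analysis and ground-truth scoring the abstract describes, the sketch below simulates clustered localizations over a uniform background, clusters them with DBSCAN (one of the seven evaluated algorithms) via scikit-learn, and compares the result with the known labels using the adjusted Rand index. The Rand index is only a stand-in for the paper's two metrics, which are not reproduced in this record, and all simulation and DBSCAN settings (cluster radius, localization counts, eps, min_samples) are arbitrary assumptions chosen for demonstration.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Simulate 10 circular clusters (50 nm radius, 30 localizations each) plus
# 200 background localizations inside a 2 x 2 um region of interest.
roi = 2000.0  # ROI side length in nm
centres = rng.uniform(200.0, roi - 200.0, size=(10, 2))
points, labels = [], []
for i, c in enumerate(centres):
    r = 50.0 * np.sqrt(rng.uniform(size=30))        # uniform within a disc
    theta = rng.uniform(0.0, 2.0 * np.pi, size=30)
    points.append(c + np.column_stack((r * np.cos(theta), r * np.sin(theta))))
    labels.append(np.full(30, i))
points.append(rng.uniform(0.0, roi, size=(200, 2)))  # uniform background
labels.append(np.full(200, -1))                      # -1 marks non-clustered points
xy = np.vstack(points)
truth = np.concatenate(labels)

# Cluster and score; eps and min_samples are assumed values that would
# normally be tuned against simulations matched to the experimental data.
pred = DBSCAN(eps=40.0, min_samples=5).fit_predict(xy)
print("adjusted Rand index:", adjusted_rand_score(truth, pred))

In practice, the clustering parameters would not be fixed by hand as above but selected, as the abstract proposes, by identifying simulated conditions statistically similar to the experimental data and choosing the algorithm and parameters that score best there.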
Pages: 259-267
Number of pages: 8