MIA is an open-source standalone deep learning application for microscopic image analysis

Cited by: 3
Authors
Koerber, Nils [1,2]
Affiliations
[1] German Fed Inst Risk Assessment BfR, German Ctr Protect Lab Anim Bf3R, Berlin, Germany
[2] Robert Koch Inst, Ctr Artificial Intelligence Publ Hlth Res, Berlin, Germany
Source
CELL REPORTS METHODS | 2023, Vol. 3, Issue 7
Keywords
PLATFORM
DOI
10.1016/j.crmeth.2023.100517
Chinese Library Classification
Q5 [Biochemistry]
Subject classification codes
071010; 081704
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats that are compatible with commonly used open-source software packages. The software provides a unified interface for image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and ranked among the top three for all tested datasets.
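As a rough illustration of the interoperability described in the abstract, the sketch below assumes a segmentation mask has been exported from MIA in a generic open image format such as TIFF; the file name, the foreground/background convention, and the use of scikit-image are assumptions for this example, not part of MIA's documented interface.

```python
# Hypothetical downstream analysis of an exported label mask using common
# open-source Python packages; "cells_mask.tif" and the label convention
# (0 = background) are assumptions, not MIA specifics.
import numpy as np
from skimage import io, measure

mask = io.imread("cells_mask.tif")   # integer label image exported as TIFF
labels = measure.label(mask > 0)     # connected components as individual objects
props = measure.regionprops(labels)  # per-object statistics

print(f"objects detected: {labels.max()}")
for p in props[:5]:                  # report a few example objects
    print(f"label {p.label}: area={p.area} px, centroid={np.round(p.centroid, 1)}")
```

Because such an export is a plain image file, it could equally be opened in other widely used open-source tools (for example Fiji or napari) without any MIA-specific tooling.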
Pages: 11