Cross-modality transformations in biological microscopy enabled by deep learning

Cited by: 0
Authors
Dana Hassan [1,2]
Jesús Manuel Antúnez Domínguez [1,3]
Benjamin Midtvedt [1]
Henrik Klein Moberg [4]
Jesús Pineda [1]
Christoph Langhammer [4]
Giovanni Volpe [1]
Antoni Homs Corbera [2]
Caroline B. Adiels [1]
Affiliations
[1] University of Gothenburg, Department of Physics
[2] Cherry Biotech, Research and Development Unit
[3] Elvesys–Microfluidics Innovation Center
[4] Chalmers University of Technology, Department of
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]; TP391.41; TH742 [Microscopes];
Subject classification codes
081104; 0812; 0835; 1405; 080203
Abstract
Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analyses that were hitherto impossible. Despite these strides, the integration of such algorithms into scientists' daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancement. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical staining procedures. In addition, they facilitate imaging with properties that would typically require costly or complex physical modifications, such as super-resolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
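The two applications highlighted in the abstract, virtual contrast enhancement (fluorescence-like staining without physical labels) and resolution enhancement, are commonly framed as supervised image-to-image translation on co-registered image pairs. The listing below is a minimal, illustrative sketch of that framing in PyTorch; the tiny U-Net-style network, the L1 loss, and the placeholder brightfield/fluorescence tensors are assumptions for demonstration and do not reproduce any specific method from the reviewed works.

# Minimal sketch of supervised cross-modality image translation (assumed setup,
# not the reviewed paper's implementation): a small U-Net-style network maps a
# label-free input image to a fluorescence-like target using paired data.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic encoder/decoder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                  # full-resolution features
        e2 = self.enc2(self.pool(e1))      # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)               # predicted target-modality image

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise loss; adversarial or perceptual terms are common additions

# Placeholder batch of co-registered image pairs (input modality -> target modality).
brightfield = torch.rand(4, 1, 128, 128)
fluorescence = torch.rand(4, 1, 128, 128)

optimizer.zero_grad()
loss = loss_fn(model(brightfield), fluorescence)
loss.backward()
optimizer.step()

In practice, the methods surveyed typically replace such a toy network with deeper architectures (for example, GAN-based translators) and rely on large, carefully registered training data sets.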
Pages: 17-37
Page count: 21
Related papers
50 records in total
  • [41] Cross-modality deep learning-based prediction of TAP binding and naturally processed peptide
    Hanan Besser
    Yoram Louzoun
    Immunogenetics, 2018, 70: 419-428
  • [42] Feasibility Study of Cross-Modality IMRT Auto-Planning Guided by a Deep Learning Model
    Szalkowski, G.
    Xu, X.
    Das, S.
    Yap, P.
    Lian, J.
    MEDICAL PHYSICS, 2021, 48 (06)
  • [43] Cross-Linked Unified Embedding for cross-modality representation learning
    Tu, Xinming
    Cao, Zhi-Jie
    Xia, Chen-Rui
    Mostafavi, Sara
    Gao, Ge
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [44] Mind the Gap: Learning Modality-Agnostic Representations With a Cross-Modality UNet
    Niu, Xin
    Li, Enyi
    Liu, Jinchao
    Wang, Yan
    Osadchy, Margarita
    Fang, Yongchun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 655 - 670
  • [45] Cross-modality representation learning from transformer for hashtag prediction
    Mian Muhammad Yasir Khalil
    Qingxian Wang
    Bo Chen
    Weidong Wang
    Journal of Big Data, 10
  • [46] LOCAL CROSS-MODALITY IMAGE ALIGNMENT USING UNSUPERVISED LEARNING
    Bernander, O.
    Koch, C.
    LECTURE NOTES IN COMPUTER SCIENCE, 1990, 427 : 573 - 575
  • [47] Infrared colorization with cross-modality zero-shot learning
    Wei, Chiheng
    Chen, Huawei
    Bai, Lianfa
    Han, Jing
    Chen, Xiaoyu
    NEUROCOMPUTING, 2024, 579
  • [48] A Cross-Modality Learning Approach for Vessel Segmentation in Retinal Images
    Li, Qiaoliang
    Feng, Bowei
    Xie, LinPei
    Liang, Ping
    Zhang, Huisheng
    Wang, Tianfu
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2016, 35 (01) : 109 - 118
  • [49] A Cross-Modality Contrastive Learning Method for Radar Jamming Recognition
    Dong, Ganggang
    Wang, Zixuan
    Liu, Hongwei
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [50] Cross-modality Representation Interactive Learning For Multimodal Sentiment Analysis
    Huang, Jian
    Ji, Yanli
    Yang, Yang
    Shen, Heng Tao
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 426 - 434