IMPAIR: Massively parallel deconvolution on the GPU

Cited by: 0
Authors
Sherry, Michael [1 ]
Shearer, Andy [1 ]
Affiliations
[1] Natl Univ Ireland, Digital Enterprise Res Inst, Galway, Ireland
Keywords
Deconvolution; Wavelet; Denoising; Parallel; HPC; GPU; CUDA; Threading; OpenMP;
DOI
10.1117/12.2008603
Chinese Library Classification
O43 [Optics];
Discipline Codes
070207 ; 0803 ;
Abstract
The IMPAIR software is a high-throughput image deconvolution tool for processing large out-of-core datasets of images, ranging from large images with spatially varying PSFs to large numbers of images with spatially invariant PSFs. IMPAIR implements a parallel version of the tried-and-tested Richardson-Lucy deconvolution algorithm, regularised via a custom wavelet thresholding library. It exploits the inherently parallel nature of the convolution operation to achieve quality results on consumer-grade hardware through three implementations: an NVIDIA Tesla GPU (CUDA) implementation, a multi-core OpenMP implementation, and a cluster-computing MPI implementation. IMPAIR addresses the problem of parallel processing in both top-down and bottom-up approaches: by managing the input data at the image level, and by managing the execution at the instruction level. Combined, these techniques lead to a scalable solution with minimal resource consumption and maximal load balancing. IMPAIR is being developed both as a stand-alone tool for image processing and as a library that can be embedded into non-parallel code to transparently provide parallel high-throughput deconvolution.
Pages: 7