Deconvolution is one of the most common image-reconstruction tasks that arise in 3D fluorescence microscopy. The aim of this challenge is to benchmark existing deconvolution algorithms and to stimulate the community to look for novel, global and practical approaches to this problem.
The challenge will be divided into two stages: a training phase and a competition (testing) phase. It will primarily be based on realistic-looking synthetic data sets representing various sub-cellular structures. In addition, it will rely on a number of common and advanced performance metrics to objectively assess the quality of the results.
The central part of the forward model is a discrete convolution of the form $$ (h*f)[\boldsymbol{k}] = \sum_{\boldsymbol{n} \in \ \mathbb{Z}^3} f[\boldsymbol{n}] \, h[\boldsymbol{k} - \boldsymbol{n}], $$ where \(f\) represents the ground-truth distribution of fluorophores and \(h\) represents the point-spread function (PSF).
The complete forward model is given by $$ g[\boldsymbol{k}] \sim Q\bigg(\mathcal{P}\Big((h*f)[\boldsymbol{k}] + b\Big) + w[\boldsymbol{k}]\bigg). $$ Here the constant \(b\) corresponds to a background signal, and the notation \(\mathcal{P}(\lambda)\) represents a Poisson random variable of intensity \(\lambda\); the latter models shot noise, which is due to the quantum nature of light. The symbols \(w[\boldsymbol{k}] \sim \mathcal{N}(0, \sigma^2)\) denote IID Gaussian random variables corresponding to detector noise. Finally, \(Q\) is a quantization operator defined by $$ Q(x) = \begin{cases} \arg\min_{n \in \mathbb{N}} |x - n| & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases} $$ and \(g\) represents the measured data. Note that our model is based on the simplifying assumption of a photon-counting detector. Thus, it does not involve a gain parameter, and the measurements \(g[\boldsymbol{k}]\) can be interpreted directly in terms of photon counts.
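As a concrete illustration, the noise and quantization stages of this forward model can be simulated in a few lines. The sketch below uses NumPy in place of Matlab; the function name `simulate_measurement` and the parameter values are our own, chosen only for the example.

```python
import numpy as np

def simulate_measurement(conv, b, sigma, rng):
    """Apply the noise and quantization stages of the forward model.
    conv  : the noiseless convolution (h*f) evaluated on the image support
    b     : constant background level
    sigma : standard deviation of the Gaussian detector noise
    """
    lam = conv + b                                      # add background signal
    shot = rng.poisson(lam).astype(float)               # Poisson shot noise
    noisy = shot + rng.normal(0.0, sigma, size=conv.shape)  # detector noise
    # quantization Q: round to the nearest non-negative integer
    return np.clip(np.rint(noisy), 0, None).astype(int)

# toy 3D stack standing in for (h*f); values are arbitrary
rng = np.random.default_rng(0)
conv = rng.uniform(0.0, 50.0, size=(4, 4, 4))
g = simulate_measurement(conv, b=10.0, sigma=2.0, rng=rng)
```

Because the detector is assumed to count photons, the simulated `g` consists of non-negative integers that can be read directly as photon counts.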
In the framework of this challenge we assume that \(f\) has finite support \([\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]\), where \(\boldsymbol{N} = [N_x \ N_y \ N_z]^T\) and $$ [\negthinspace[\boldsymbol{M}, \boldsymbol{N}]\negthinspace] = \{M_x, \ldots, N_x\} \times \{M_y, \ldots, N_y\} \times \{M_z, \ldots, N_z\}. $$ This means that the above convolution reduces to a finite sum: $$ (h*f)[\boldsymbol{k}] = \sum_{\boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]} f[\boldsymbol{n}] \, h[\boldsymbol{k} - \boldsymbol{n}]. $$ Each synthetic dataset provided for the challenge corresponds to measurements \(g[\boldsymbol{k}]\) taken over the same support as the ground truth: \(\boldsymbol{k} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]\). Thus, to simulate the forward model one needs knowledge of the PSF \(h[\boldsymbol{n}]\) for \(\boldsymbol{n} \in [\negthinspace[-\boldsymbol{N}+\boldsymbol{1}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]\).
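For small volumes, the finite sum can be evaluated directly. The NumPy sketch below adopts a hypothetical storage convention (not prescribed by the challenge): the PSF is held in an array of shape \(2\boldsymbol{N}-\boldsymbol{1}\) whose index \(\boldsymbol{m}\) corresponds to the offset \(\boldsymbol{m} - (\boldsymbol{N}-\boldsymbol{1})\).

```python
import numpy as np

def conv_direct(f, h):
    """Direct evaluation of (h*f)[k] for k in [[0, N-1]].
    f : ground truth, shape N
    h : PSF, shape 2N-1; h[m] holds the PSF sample at offset m - (N-1)
    """
    N = np.array(f.shape)
    out = np.zeros(f.shape)
    for k in np.ndindex(f.shape):
        acc = 0.0
        for n in np.ndindex(f.shape):
            m = tuple(np.array(k) - np.array(n) + N - 1)  # array index of h[k-n]
            acc += f[n] * h[m]
        out[k] = acc
    return out

# sanity check: a delta input reproduces the PSF's non-negative-offset samples
f = np.zeros((3, 3, 3)); f[0, 0, 0] = 1.0
h = np.arange(5 * 5 * 5, dtype=float).reshape(5, 5, 5)
out = conv_direct(f, h)
```

This brute-force loop is \(O(N^2)\) and only meant to make the indexing explicit; an efficient implementation uses the FFT, as discussed below.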
Important consequence: while the size of the measured image stacks that we provide is \(N_x \times N_y \times N_z\), the size of the PSFs is \((2N_x-1) \times (2N_y-1) \times (2N_z-1)\); i.e., the PSFs are roughly twice as large along every dimension.
The convolution product \(h*f\) can be represented in matrix form as \(\mathbf{Hf}\), where the vector \(\mathbf{f}\) contains the samples \(f[\boldsymbol{n}]\) in lexicographic order and \(\mathbf{H}\) is a block-Toeplitz matrix constructed from the samples \(h[\boldsymbol{n}]\). In this sense our model differs from traditional deconvolution settings where the matrix \(\mathbf{H}\) is assumed to be block-circulant.
In practice, the convolution \(h*f\) can be computed using a circulant convolution involving signals whose support is twice that of \(f\). Let us first recall the definition of a circulant convolution for signals of length \(2\boldsymbol{N}\): $$ (\tilde{h} \circledast_{2\boldsymbol{N}} \tilde{f})[\boldsymbol{k}] = \sum_{\boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, 2\boldsymbol{N}-\boldsymbol{1}]\negthinspace]} \tilde{f}[\boldsymbol{n}] \, \tilde{h}[\boldsymbol{k} - \boldsymbol{n} \ \text{mod} \ 2\boldsymbol{N}], $$ where the modulo operation is applied componentwise. We now define \(\tilde{f}\) by zero-padding \(f\), $$ \tilde{f}[\boldsymbol{n}] = \begin{cases} f[\boldsymbol{n}] & \text{if } \boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace], \\ 0 & \text{otherwise;} \end{cases} $$ and \(\tilde{h}\) by shifting and cropping \(h\), $$ \tilde{h}[\boldsymbol{n}] = \begin{cases} h[\boldsymbol{n}-\boldsymbol{N}+\boldsymbol{1}] & \text{if } \boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, 2\boldsymbol{N}-\boldsymbol{2}]\negthinspace], \\ 0 & \text{otherwise.} \end{cases} $$ For \(\boldsymbol{k} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]\), since \(\tilde{f}\) vanishes outside \([\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]\), we then have $$ \begin{eqnarray*} (\tilde{h} \circledast_{2\boldsymbol{N}} \tilde{f})[\boldsymbol{k}+\boldsymbol{N}-\boldsymbol{1}] & = & \sum_{\boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]} f[\boldsymbol{n}] \, \tilde{h}[\boldsymbol{k} + \boldsymbol{N} - \boldsymbol{1} - \boldsymbol{n} \ \text{mod} \ 2\boldsymbol{N}] \\ & = & \sum_{\boldsymbol{n} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]} f[\boldsymbol{n}] \, \tilde{h}[\boldsymbol{k} - \boldsymbol{n} + \boldsymbol{N} - \boldsymbol{1}], \end{eqnarray*} $$ where the second equality holds because \(\boldsymbol{k} - \boldsymbol{n} + \boldsymbol{N} - \boldsymbol{1} \in [\negthinspace[\boldsymbol{0}, 2\boldsymbol{N}-\boldsymbol{2}]\negthinspace]\), so the modulo operation has no effect.
This leads to $$ (\tilde{h} \circledast_{2\boldsymbol{N}} \tilde{f})[\boldsymbol{k}+\boldsymbol{N}-\boldsymbol{1}] = (h*f)[\boldsymbol{k}] \quad \text{for all } \boldsymbol{k} \in [\negthinspace[\boldsymbol{0}, \boldsymbol{N}-\boldsymbol{1}]\negthinspace]. $$
The circulant convolution itself can be implemented efficiently using the FFT algorithm. We refer the participants to the documentation of the Matlab scripts for more details.
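The zero-padding, shifting, and cropping steps above translate into a short FFT-based routine. The sketch below is in NumPy rather than Matlab and assumes a hypothetical storage convention for the full PSF: an array of shape \(2\boldsymbol{N}-\boldsymbol{1}\) whose index \(\boldsymbol{m}\) corresponds to the offset \(\boldsymbol{m} - (\boldsymbol{N}-\boldsymbol{1})\).

```python
import numpy as np

def conv_via_fft(f, h):
    """Compute (h*f)[k], k in [[0, N-1]], via a circulant convolution of size 2N.
    f : shape N; h : full PSF, shape 2N-1 (index m <-> offset m - (N-1))."""
    N = f.shape
    size2 = tuple(2 * n for n in N)
    f_pad = np.zeros(size2)                          # f~ : zero-padded f
    f_pad[tuple(slice(0, n) for n in N)] = f
    h_pad = np.zeros(size2)                          # h~ : shifted/cropped PSF
    h_pad[tuple(slice(0, s) for s in h.shape)] = h
    # circulant convolution = pointwise product in the DFT domain
    circ = np.fft.ifftn(np.fft.fftn(f_pad) * np.fft.fftn(h_pad)).real
    # extract (h*f)[k] = (h~ circ-conv f~)[k + N - 1]
    return circ[tuple(slice(n - 1, 2 * n - 1) for n in N)]

# 1D check against NumPy's full linear convolution (N = 6)
rng = np.random.default_rng(1)
f1 = rng.normal(size=6)
h1 = rng.normal(size=11)
g1 = conv_via_fft(f1, h1)

# 3D check: a delta input reproduces the PSF's non-negative-offset samples
f3 = np.zeros((3, 3, 3)); f3[0, 0, 0] = 1.0
h3 = np.arange(125, dtype=float).reshape(5, 5, 5)
g3 = conv_via_fft(f3, h3)
```

Note that padding to size \(2\boldsymbol{N}\) is what distinguishes this computation from the traditional circulant model of size \(\boldsymbol{N}\): the extracted block is free of wrap-around contamination on the image support.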
The training stage of the 2nd edition of the challenge will begin soon. Follow this link for early registration.
July 15, 2013