Recent Talks 
Biomedical Image Reconstruction 

M. Unser 


12th European Molecular Imaging Meeting, 5–7 April 2017, Cologne, Germany. 

A fundamental component of the imaging pipeline is the reconstruction algorithm. In this educational session, we review the physical and mathematical principles that underlie the design of such algorithms. We argue that the concepts are fairly universal and applicable to a majority of (bio)medical imaging modalities, including magnetic resonance imaging and fMRI, X-ray computed tomography, and positron-emission tomography (PET). Interestingly, the paradigm remains valid for modern cellular/molecular imaging with confocal/super-resolution fluorescence microscopy, which is highly relevant to molecular imaging as well. In fact, we believe that the huge potential for cross-fertilization and mutual reinforcement between imaging modalities has not been fully exploited yet. The prerequisite to image reconstruction is an accurate physical description of the image-formation process: the so-called forward model, which is assumed to be linear. Numerically, this translates into the specification of a system matrix, while the reconstruction of images conceptually boils down to a stable inversion of this matrix. The difficulty is essentially twofold: (i) the system matrix is usually much too large to be stored/inverted directly, and (ii) the problem is inherently ill-posed due to the presence of noise and/or bad conditioning of the system. Our starting point is an overview of the modalities in relation to their forward model. We then discuss the classical linear reconstruction methods that typically involve some form of backprojection (CT or PET) and/or the fast Fourier transform (in the case of MRI). We present stabilized variants of these methods that rely on (Tikhonov) regularization or the injection of prior statistical knowledge under the Gaussian hypothesis. Next, we review modern iterative schemes that can handle challenging acquisition setups such as parallel MRI, non-Cartesian sampling grids, and/or missing views. 
In particular, we discuss sparsity-promoting methods that are supported by the theory of compressed sensing. We show how to implement such schemes efficiently using simple combinations of linear solvers and thresholding operations. The main advantage of these recent algorithms is that they improve the quality of the image reconstruction. Alternatively, they allow a substantial reduction of the radiation dose and/or acquisition time without noticeable degradation in quality. This behavior is illustrated practically. In the final part of the tutorial, we discuss the current challenges and directions of research in the field; in particular, the necessity of dealing with large data sets in multiple dimensions: 2D or 3D space combined with time (in the case of dynamic imaging) and/or multispectral/multimodal information.
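The "linear solvers plus thresholding" recipe mentioned in the abstract can be sketched in a few lines of NumPy. The toy example below (sizes, seed, and regularization weight are illustrative choices, not taken from the talk) recovers a sparse signal from underdetermined linear measurements with the iterative shrinkage-thresholding algorithm (ISTA):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||x||_1: shrink every entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(H, y, lam, n_iter=1000):
    # Minimize 0.5*||H x - y||^2 + lam*||x||_1 by alternating a gradient
    # step on the quadratic data term with a soft-thresholding step.
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # inverse Lipschitz constant
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * H.T @ (H @ x - y), step * lam)
    return x

# Toy compressed-sensing setup: 40 measurements of a length-100
# signal that has only 3 nonzero entries.
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
y = H @ x_true
x_hat = ista(H, y, lam=0.01)
```

Despite there being fewer measurements than unknowns, the ℓ1 penalty drives all but a few coefficients to zero, and the active positions are typically recovered.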




Sparsity and Inverse Problems: Think Analog, and Act Digital 

M. Unser 


IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 20–25, 2016, Shanghai, China. 

Sparsity and compressed sensing are very popular topics in signal processing. More and more researchers are relying on ℓ1-type minimization schemes for solving a variety of ill-posed problems in imaging. The paradigm is well established with a solid mathematical foundation, although the arguments that have been put forth in the past are deterministic and finite-dimensional for the most part. In this presentation, we shall promote a continuous-domain formulation of the problem ("think analog") that is more closely tied to the physics of imaging and that also lends itself better to mathematical analysis. For instance, we shall demonstrate that splines (which are inherently sparse) are global optimizers of linear inverse problems with total-variation (TV) regularization constraints. Alternatively, one can adopt an infinite-dimensional statistical point of view by modeling signals as sparse stochastic processes. The guiding principle is then to discretize the inverse problem by projecting both the statistical and physical measurement models onto a linear reconstruction space. This leads to the specification of a general class of maximum a posteriori (MAP) signal estimators complemented with a practical iterative reconstruction scheme ("act digital"). While the framework is compatible with the traditional methods of Tikhonov and TV, it opens the door to a much broader class of potential functions that are inherently sparse, while it also suggests alternative Bayesian recovery procedures. We shall illustrate the approach with the reconstruction of images in a variety of modalities including MRI, phase-contrast tomography, cryo-electron tomography, and deconvolution microscopy. 



Sparsity and the optimality of splines for inverse problems: Deterministic vs. statistical justifications 

M. Unser 


Invited talk: Mathematics and Image Analysis (MIA'16), 18–20 January 2016, Institut Henri Poincaré, Paris, France. 

In recent years, significant progress has been achieved in the resolution of ill-posed linear inverse problems by imposing ℓ1/TV regularization constraints on the solution. Such sparsity-promoting schemes are supported by the theory of compressed sensing, which is finite-dimensional for the most part. In this talk, we take an infinite-dimensional point of view by considering signals that are defined in the continuous domain. We claim that nonuniform splines whose type is matched to the regularization operator are optimal candidate solutions. We show that such functions are global minimizers of a broad family of convex variational problems where the measurements are linear and the regularization is a generalized form of total variation associated with some operator L. We then discuss the link with sparse stochastic processes that are solutions of the same type of differential equations. The pleasing outcome is that the statistical formulation yields maximum a posteriori (MAP) signal estimators that involve the same type of sparsity-promoting regularization, albeit in a discretized form. The latter corresponds to the log-likelihood of the projection of the stochastic model onto a finite-dimensional reconstruction space.




Challenges and Opportunities in Biological Imaging 

M. Unser, Professor, Ecole Polytechnique Fédérale de Lausanne, Biomedical Imaging Group 


Plenary. IEEE International Conference on Image Processing (ICIP), 27–30 September 2015, Québec City, Canada. 

While the major achievements in medical imaging can be traced back to the end of the 20th century, there are strong indicators that we have recently entered the golden age of cellular/biological imaging. The enabling modality is fluorescence microscopy, which results from the combination of highly specific fluorescent probes (Nobel Prize 2008) and sophisticated optical instrumentation (Nobel Prize 2014). This has led to the emergence of modern microscopy centers that are providing biologists with unprecedented amounts of data in 3D + time. To address the computational aspects, two nascent fields have emerged in which image processing is expected to play a significant role. The first is "digital optics", where the idea is to combine optics with advanced signal processing in order to increase spatial resolution while reducing acquisition time. The second area is "bioimage informatics", which is concerned with the development of image-analysis software to make microscopy more quantitative. The key issue here is reliable image segmentation as well as the ability to track structures of interest over time. We shall discuss specific examples and describe state-of-the-art solutions for bioimage reconstruction and analysis. This will help us build a list of challenges and opportunities to guide further research in bioimaging. 



Sparse stochastic processes: A statistical framework for compressed sensing and biomedical image reconstruction 

M. Unser 


Plenary. IEEE International Symposium on Biomedical Imaging (ISBI), 16–19 April 2015, New York, USA. 

Sparsity is a powerful paradigm for introducing prior constraints on signals in order to address ill-posed image reconstruction problems. In this talk, we first present a continuous-domain statistical framework that supports the paradigm. We consider stochastic processes that are solutions of non-Gaussian stochastic differential equations driven by white Lévy noise. We show that this yields intrinsically sparse signals in the sense that they admit a concise representation in a matched wavelet basis. We apply our formalism to the discretization of ill-conditioned linear inverse problems where both the statistical and physical measurement models are projected onto a linear reconstruction space. This leads to the specification of a general class of maximum a posteriori (MAP) signal estimators complemented with a practical iterative reconstruction scheme. While our family of estimators includes the traditional methods of Tikhonov and total-variation (TV) regularization as particular cases, it opens the door to a much broader class of potential functions that are inherently sparse and typically nonconvex. We apply our framework to the reconstruction of images in a variety of modalities including MRI, phase-contrast tomography, cryo-electron tomography, and deconvolution microscopy. Finally, we investigate the possibility of specifying signal estimators that are optimal in the MSE sense. There, we consider the simpler denoising problem and present a direct solution for first-order processes based on message passing that serves as our gold standard. We also point out some of the pitfalls of the MAP paradigm (in the non-Gaussian setting) and indicate future directions of research. 



Sparse stochastic processes: A statistical framework for compressed sensing and biomedical image reconstruction 

M. Unser 


4-hour tutorial, Inverse Problems and Imaging Conference, Institut Henri Poincaré, Paris, April 7–11, 2014. 

We introduce an extended family of continuous-domain sparse processes that are specified by a generic (non-Gaussian) innovation model or, equivalently, as solutions of linear stochastic differential equations driven by white Lévy noise. We present the functional tools for their characterization. We show that their transform-domain probability distributions are infinitely divisible, which induces two distinct types of behavior (Gaussian vs. sparse) to the exclusion of any other. This is the key to proving that the non-Gaussian members of the family admit a sparse representation in a matched wavelet basis. Next, we apply our continuous-domain characterization of the signal to the discretization of ill-conditioned linear inverse problems where both the statistical and physical measurement models are projected onto a linear reconstruction space. This leads to the derivation of a general class of maximum a posteriori (MAP) signal estimators. While the formulation is compatible with the standard methods of Tikhonov and ℓ1-type regularization, which both appear as particular cases, it opens the door to a much broader class of sparsity-promoting regularization schemes that are typically nonconvex. We illustrate the concept with the derivation of algorithms for the reconstruction of biomedical images (deconvolution microscopy, MRI, X-ray tomography) from noisy and/or incomplete data. The proposed framework also suggests alternative Bayesian recovery procedures that minimize the estimation error.




Sparse stochastic processes: A statistical framework for modern signal processing 

M. Unser 

Plenary talk, International Conference on Systems, Signals and Image Processing (IWSSIP), Bucharest, July 7–9, 2013. 

We introduce an extended family of sparse processes that are specified by a generic (non-Gaussian) innovation model or, equivalently, as solutions of linear stochastic differential equations driven by white Lévy noise. We present the mathematical tools for their characterization.




Towards a theory of sparse stochastic processes, or when Paul Lévy joins forces with Norbert Wiener 

M. Unser 

Mathematics and Image Analysis 2012 (MIA'12), Paris, January 16–18, 2012 

The current formulations of compressed sensing and sparse signal recovery are based on solid variational principles, but they are fundamentally deterministic. By drawing on the analogy with the classical theory of signal processing, it is likely that further progress may be achieved by adopting a statistical (or estimation-theoretic) point of view. Here, we shall argue that Paul Lévy (1886–1971), who was one of the very early proponents of Haar wavelets, was once more ahead of his time. He is the originator of the Lévy–Khinchine formula, which happens to be the perfect (non-Gaussian) ingredient to support a continuous-domain theory of sparse stochastic processes. Specifically, we shall present an extended class of signal models that are ruled by stochastic differential equations (SDEs) driven by white Lévy noise. Lévy noise is a highly singular mathematical entity that can be interpreted as the weak derivative of a Lévy process. A special case is Gaussian white noise, which is the weak derivative of the Wiener process (a.k.a. Brownian motion). When the excitation (or innovation) is Gaussian, the proposed model is equivalent to the traditional one. Of special interest is the property that the signals generated by non-Gaussian linear SDEs tend to be sparse by construction; they also admit a concise representation in some adapted wavelet basis. Moreover, these processes can be (approximately) decoupled by applying a discrete version of the whitening operator (e.g., a finite-difference operator). The corresponding log-likelihood functions, which are nonquadratic, can be specified analytically. In particular, this allows us to uncover a Lévy process that results in a maximum a posteriori (MAP) estimator that is equivalent to total variation. We make the connection with current methods for the recovery of sparse signals and present some examples of MAP reconstruction of MR images with sparse priors. 
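The generative side of this story is easy to visualize numerically. The sketch below (a discretized caricature with invented parameters, not code from the talk) integrates Gaussian vs. sparse compound-Poisson innovations and checks that the matched discrete whitening operator, here a first-order finite difference, recovers the decoupled innovations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Discretized innovations (increments) of two Levy processes:
gauss_innov = rng.standard_normal(n)                 # Gaussian white noise
mask = rng.random(n) < 0.02                          # ~2% of sites are active
poisson_innov = mask * rng.standard_normal(n) * 5.0  # sparse, compound-Poisson

# Integration (discrete counterpart of solving D x = w):
brownian = np.cumsum(gauss_innov)       # Wiener process / Brownian motion
sparse_proc = np.cumsum(poisson_innov)  # piecewise-constant sparse process

# The matched discrete whitening operator is the first-order finite
# difference; it decouples the process and returns the innovations.
recovered = np.diff(np.concatenate(([0.0], sparse_proc)))
```

Plotting `brownian` against `sparse_proc` makes the dichotomy visible: the former fluctuates everywhere, while the latter is flat except at a few jump locations.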



Recent Advances in Biomedical Imaging and Signal Analysis 

M. Unser 

Proceedings of the Eighteenth European Signal Processing Conference (EUSIPCO'10), Ålborg, Denmark, August 23–27, 2010, EURASIP Fellow inaugural lecture. 

Wavelets have the remarkable property of providing sparse representations of a wide variety of "natural" images. They have been applied successfully to biomedical image analysis and processing since the early 1990s. In the first part of this talk, we explain how one can exploit the sparsifying property of wavelets to design more effective algorithms for image denoising and reconstruction, both in terms of quality and computational performance. This is achieved within a variational framework by imposing some ℓ1-type regularization in the wavelet domain, which favors sparse solutions. We discuss some corresponding iterative shrinkage-thresholding algorithms (ISTA) for sparse signal recovery and introduce a multilevel variant for greater computational efficiency. We illustrate the method with two concrete imaging examples: the deconvolution of 3D fluorescence micrographs, and the reconstruction of magnetic resonance images from arbitrary (nonuniform) k-space trajectories. In the second part, we show how to design new wavelet bases that are better matched to the directional characteristics of images. We introduce a general operator-based framework for the construction of steerable wavelets in any number of dimensions. This approach gives access to a broad class of steerable wavelets that are self-reversible and linearly parameterized by a matrix of shaping coefficients; it extends upon Simoncelli's steerable pyramid by providing much greater wavelet diversity. The basic version of the transform (higher-order Riesz wavelets) extracts the partial derivatives of order N of the signal (e.g., gradient or Hessian). We also introduce a signal-adapted design, which yields a PCA-like tight wavelet frame. We illustrate the capabilities of these new steerable wavelets for image analysis and processing (denoising). 



Steerable wavelet transforms and multiresolution monogenic image analysis 

M. Unser 


Engineering Science Seminar, University of Oxford, UK, January 15, 2010. 

We introduce an Nth-order extension of the Riesz transform that has the remarkable property of mapping any primary wavelet frame (or basis) of L2(ℝ^{2}) into another "steerable" wavelet frame, while preserving the frame bounds. Concretely, this means we can design reversible multiscale decompositions in which the analysis wavelets (feature detectors) can be spatially rotated in any direction via a suitable linear combination of wavelet coefficients. The concept provides a rigorous functional counterpart to Simoncelli's steerable pyramid, whose construction was entirely based on digital filter design. It allows for the specification of wavelets with any order of steerability in any number of dimensions. We illustrate the method with the design of new steerable polyharmonic-spline wavelets that replicate the behavior of the Nth-order partial derivatives of an isotropic Gaussian kernel.
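The steering mechanism itself fits in a few lines: the two components of the first-order Riesz transform act as a fixed filter pair, and rotation by an angle θ is just a linear combination of their outputs. The FFT-based sketch below is an illustration under simplifying assumptions (periodic boundaries, an arbitrary grid size and angle):

```python
import numpy as np

def riesz_pair(f):
    # First-order Riesz transform of a 2-D signal, computed in the
    # frequency domain: R1 <-> -i*wx/|w|, R2 <-> -i*wy/|w|.
    F = np.fft.fft2(f)
    wy, wx = np.meshgrid(np.fft.fftfreq(f.shape[0]),
                         np.fft.fftfreq(f.shape[1]), indexing='ij')
    norm = np.hypot(wx, wy)
    norm[0, 0] = 1.0                    # avoid 0/0 at the DC bin
    r1 = np.real(np.fft.ifft2(-1j * wx / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * wy / norm * F))
    return r1, r2

# Odd-sized grid so that every nonzero frequency has its mirror in the grid.
rng = np.random.default_rng(2)
f = rng.standard_normal((33, 33))
r1, r2 = riesz_pair(f)

# "Steering": the directional response at angle theta is obtained by
# linearly combining the two fixed channels -- no new filtering needed.
theta = 0.3
steered = np.cos(theta) * r1 + np.sin(theta) * r2
```

The frame-bound preservation mentioned in the abstract shows up numerically as energy conservation: the two Riesz channels together carry exactly the energy of the zero-mean part of the input.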




Sampling: 60 Years After Shannon 

M. Unser 

Plenary talk, Sixteenth International Conference on Digital Signal Processing (DSP'09), Santorini, Greece, July 5–7, 2009. 

The purpose of this talk is to present a modern, unifying perspective of sampling, while demonstrating that the research in this area is still alive and well. We concentrate on the traditional setup where the samples are taken on a uniform grid, but we explicitly take into account the non-ideal nature of the acquisition device and the fact that the measurements may be corrupted by noise. We argue in favor of a variational formulation where the optimal signal reconstruction is specified via a functional optimization problem. The cost to minimize is the sum of a discrete data term and a regularization functional that penalizes nondesirable solutions. We show that, when the regularization is quadratic, the optimal signal reconstruction (among all possible functions) is a generalized spline whose type is tied to the regularization operator. This leads to an optimal discretization and an efficient signal reconstruction in terms of generalized B-spline basis functions. A possible variation is to penalize the L1-norm of the derivative of the function (total variation), which can also be achieved within the spline framework via a suitable knot-deletion process.
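In a discretized 1-D setting, the quadratic case reduces to a single linear solve. The sketch below (operator, weight, and boundary handling are simplistic choices for illustration, not the talk's formulation) penalizes the energy of the second-order finite difference, a discrete analog of a smoothing-spline regularizer:

```python
import numpy as np

def smoothing_spline(y, lam):
    # Minimize ||c - y||^2 + lam * ||D2 c||^2, where D2 is the
    # second-order finite-difference operator (discrete d^2/dx^2).
    # The closed-form solution is c = (I + lam * D2'D2)^{-1} y.
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Noisy samples of a smooth trend.
rng = np.random.default_rng(3)
y = np.sin(np.linspace(0, 3, 50)) + 0.3 * rng.standard_normal(50)
c = smoothing_spline(y, lam=10.0)
```

Because affine trends lie in the null space of the second-difference operator, they pass through the smoother unchanged, while the roughness of noisy data is always reduced.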


Wavelets and Differential Operators: From Fractals to Marr's Primal Sketch 

M. Unser 

Plenary talk, proceedings of the Fourth French Biennial Congress on Applied and Industrial Mathematics (SMAI'09), La Colle sur Loup, France, May 25–29, 2009. 

Invariance is an attractive principle for specifying image processing algorithms. In this presentation, we promote affine invariance (more precisely, invariance with respect to translation, scaling, and rotation). As a starting point, we identify the corresponding class of invariant 2D operators: these are combinations of the (fractional) Laplacian and the complex gradient (or Wirtinger operator). We then specify some corresponding differential equation and show that the solution in the real-valued case is either a fractional Brownian field or a polyharmonic spline, depending on the nature of the system input (driving term): stochastic (white noise) or deterministic (stream of Dirac impulses). The affine invariance of the operator has two important consequences: (1) the statistical self-similarity of the fractional Brownian field, and (2) the fact that the polyharmonic splines specify a multiresolution analysis of L_{2}(ℝ^{2}) and lend themselves to the construction of wavelet bases. The other fundamental implication is that the corresponding wavelets behave like multiscale versions of the operator from which they are derived; this makes them ideally suited for the analysis of multidimensional signals with fractal characteristics (whitening property of the fractional Laplacian) [1]. The complex extension of the approach yields a new complex wavelet basis that replicates the behavior of the Laplace-gradient operator and is therefore adapted to edge detection [2]. We introduce the Marr wavelet pyramid, which corresponds to a slightly redundant version of this transform with a Gaussian-like smoothing kernel that has been optimized for better steerability. We demonstrate that this multiresolution representation is well suited for a variety of image-processing tasks. In particular, we use it to derive a primal wavelet sketch (a compact description of the image by a multiscale, subsampled edge map) and provide a corresponding iterative reconstruction algorithm. References: [1] P.D. Tafti, D. Van De Ville, M. Unser, "Invariances, Laplacian-Like Wavelet Bases, and the Whitening of Fractal Processes," IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 689–702, April 2009. [2] D. Van De Ville, M. Unser, "Complex Wavelet Bases, Steerability, and the Marr-Like Pyramid," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2063–2080, November 2008. 



Beyond the digital divide: Ten good reasons for using splines 

M. Unser 


Seminars of Numerical Analysis, EPFL, May 9, 2010. 

"Think analog, act digital" is a motto that is relevant to scientific computing and algorithm design in a variety of disciplines,
including numerical analysis, image/signal processing, and computer graphics. 



Sampling and Approximation Theory 

M. Unser 

Plenary talk, Summer School "New Trends and Directions in Harmonic Analysis, Approximation Theory, and Image Analysis," Inzell, Germany, September 17–21, 2007. 

This tutorial will explain the modern, Hilbert-space approach for the discretization (sampling) and reconstruction (interpolation) of images (in two or higher dimensions). The emphasis will be on quality and optimality, which are important considerations for biomedical applications. The main point in the modern formulation is that the signal model need not be bandlimited. In fact, it makes much better sense computationally to consider spline or wavelet-like representations that involve much shorter (e.g., compactly supported) basis functions that are shifted replicates of a single prototype (e.g., B-spline). We will show how Shannon's standard sampling paradigm can be adapted for dealing with such representations. In essence, this boils down to modifying the classical "anti-aliasing" prefilter so that it is optimally matched to the representation space (in practice, this can be accomplished by suitable digital post-filtering). Another important issue will be the assessment of interpolation quality and the identification of basis functions (and interpolators) that offer the best performance for a given computational budget. Reference: M. Unser, "Sampling—50 Years After Shannon," Proceedings of the IEEE, vol. 88, no. 4, pp. 569–587, April 2000. 
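The "digital post-filtering" step has a simple matrix formulation in the cubic case: since the cubic B-spline takes the values (1/6, 4/6, 1/6) at the integers, the interpolation coefficients solve a tridiagonal system. A minimal sketch (plain truncation at the boundaries instead of the usual mirror conditions, purely for brevity):

```python
import numpy as np

def bspline3(x):
    # Centered cubic B-spline, evaluated pointwise.
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
           np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

def prefilter(y):
    # Solve B c = y, where B holds the integer samples of the cubic
    # B-spline (1/6, 4/6, 1/6), so that the spline model interpolates y.
    n = len(y)
    i = np.arange(n)
    B = np.zeros((n, n))
    B[i, i] = 4 / 6
    B[i[:-1], i[:-1] + 1] = 1 / 6
    B[i[1:], i[1:] - 1] = 1 / 6
    return np.linalg.solve(B, y)

def spline_eval(c, x):
    # Sum of integer-shifted B-splines weighted by the coefficients c.
    k = np.arange(len(c))
    return np.sum(c[None, :] * bspline3(np.asarray(x)[:, None] - k[None, :]),
                  axis=1)

rng = np.random.default_rng(4)
y = rng.standard_normal(12)
c = prefilter(y)
```

In production code the tridiagonal solve is replaced by a pair of fast recursive filters; the point here is only that the coefficients `c` differ from the samples `y`, and that skipping the prefilter breaks interpolation.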



Affine Invariance, Splines, Wavelets and Fractional Brownian Fields 

M. Unser 

Mathematical Image Processing Meeting (MIPM'07), Marseilles, France, September 3–7, 2007. 

Invariance is an attractive principle for specifying image processing algorithms. In this work, we concentrate on affine invariance (more precisely: shift, scale, and rotation invariance) and identify the corresponding class of operators, which are fractional Laplacians. We then specify some corresponding differential equation and show that the solution (in the distributional sense) is either a fractional Brownian field (Mandelbrot and Van Ness, 1968) or a polyharmonic spline (Duchon, 1976), depending on the nature of the system input (driving term): stochastic (white noise) or deterministic (stream of Dirac impulses). The affine invariance of the operator has two remarkable consequences: (1) the statistical self-similarity of the fractional Brownian field, and (2) the fact that the polyharmonic splines specify a multiresolution analysis of L_{2}(ℝ^{d}) and lend themselves to the construction of wavelet bases. We prove that these wavelets essentially behave like the operator from which they are derived, and that they are ideally suited for the analysis of multidimensional signals with fractal characteristics (isotropic differentiation, and whitening property). This is joint work with Pouya Tafti and Dimitri Van De Ville. 



Splines, Noise, Fractals and Optimal Signal Reconstruction 

M. Unser 

Plenary talk, Seventh International Workshop on Sampling Theory and Applications (SampTA'07), Thessaloniki, Greece, June 1–5, 2007. 

We consider the generalized sampling problem with a non-ideal acquisition device. The task is to "optimally" reconstruct a continuously varying input signal from its discrete, noisy measurements in some integer-shift-invariant space. We propose three formulations of the problem (variational/Tikhonov, minimax, and minimum-mean-square-error estimation) and derive the corresponding solutions for a given reconstruction space. We prove that these solutions are also globally optimal, provided that the reconstruction space is matched to the regularization operator (deterministic signal) or, alternatively, to the whitening operator of the process (stochastic modeling). Moreover, the three formulations lead to the same generalized smoothing-spline reconstruction algorithm, but only if the reconstruction space is chosen optimally. We then show that fractional splines and fractal processes (fBm) are solutions of the same type of differential equations, except that the context is different: deterministic versus stochastic. We use this link to provide a solid stochastic justification of spline-based reconstruction algorithms. Finally, we propose a novel formulation of vector splines based on similar principles, and demonstrate their application to flow-field reconstruction from nonuniform, incomplete ultrasound Doppler data. This is joint work with Yonina Eldar, Thierry Blu, and Muthuvel Arigovindan. 



Wavelets Demystified 

M. Unser 

Invited presentation, Technical University of Eindhoven, The Netherlands, May 31, 2006. 

This 2-hour tutorial focuses on wavelet bases: it covers the concept of multiresolution analysis, the construction of wavelets, filterbank algorithms, as well as an in-depth discussion of fundamental wavelet properties. The presentation is progressive, starting with the example of the Haar transform, and essentially self-contained. 
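For the Haar example that opens the tutorial, one level of the orthonormal analysis/synthesis filterbank takes only a few lines (even-length input assumed; this is a generic textbook sketch, not material from the talk):

```python
import numpy as np

def haar_analysis(x):
    # One level of the orthonormal Haar transform (len(x) must be even):
    # pairwise normalized sums (lowpass) and differences (highpass).
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return s, d

def haar_synthesis(s, d):
    # Inverse transform: rebuild and interleave the even/odd samples.
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

x = np.arange(8.0)
s, d = haar_analysis(x)
```

A multiresolution analysis follows by recursing on the lowpass channel `s`; orthonormality means the transform preserves energy exactly.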



Sampling and Interpolation for Biomedical Imaging 

M. Unser 

2006 IEEE International Symposium on Biomedical Imaging, April 6–9, 2006, Arlington, Virginia, USA. 

This tutorial will explain the modern, Hilbertspace approach for the discretization (sampling) and reconstruction (interpolation) of images
(in two or higher dimensions). The emphasis will be on quality and optimality, which are important considerations for biomedical applications.




Splines: A Unifying Framework for Image Processing 

M. Unser 

Plenary talk, 2005 IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, September 11–14, 2005. 

Our purpose is to justify the use of splines in imaging applications, emphasizing their ease of use as well as their fundamental properties. Modeling images with splines is painless: it essentially amounts to replacing the pixels by B-spline basis functions, which are piecewise polynomials with a maximum order of differentiability. The spline representation is flexible and provides the best cost/quality tradeoff among all interpolation methods: by increasing the degree, one shifts from a simple piecewise-linear representation to a higher-order one that gets closer and closer to being bandlimited. We will describe efficient digital-filter-based algorithms for interpolating and processing images within this framework. We will also discuss the multiresolution properties of splines that make them especially attractive for multiscale processing.
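The degree hierarchy mentioned above follows from one defining property: a B-spline of degree n is the (n+1)-fold convolution of the unit box. A small numerical sketch (the step size h is an arbitrary discretization choice for visualizing the continuous-domain functions):

```python
import numpy as np

h = 0.01                      # sampling step of the continuous-domain picture
box = np.ones(int(1 / h))     # B-spline of degree 0: unit box on [0, 1)

def bspline(degree):
    # Degree-n B-spline via repeated convolution with the box;
    # the factor h keeps the Riemann-sum approximation of the integral.
    b = box
    for _ in range(degree):
        b = np.convolve(b, box) * h
    return b

b1 = bspline(1)   # piecewise-linear "hat", support of width 2
b3 = bspline(3)   # cubic B-spline, support of width 4, bell-shaped
```

Each convolution widens the support by one unit, raises the polynomial degree by one, and (by the central-limit theorem) pushes the shape ever closer to a Gaussian, while the unit integral is preserved at every degree.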




Vers une théorie unificatrice pour le traitement numérique/analogique des signaux (Towards a Unifying Theory for Digital/Analog Signal Processing) 

M. Unser 

Twentieth GRETSI Symposium on Signal and Image Processing (GRETSI'05), Louvain-la-Neuve, Belgium, September 6–9, 2005. 

We introduce a Hilbertspace framework, inspired by wavelet theory, that provides an exact link between the traditional—discrete and analog—formulations of signal processing. In contrast to Shannon's sampling theory, our approach uses basis functions that are compactly supported and therefore better suited for numerical computations. The underlying continuoustime signal model is of exponential spline type (with rational transfer function); this family of functions has the advantage of being closed under the basic signalprocessing operations: differentiation, continuoustime convolution, and modulation. A key point of the method is that it allows an exact implementation of continuoustime operators by simple processing in the discrete domain, provided that one updates the basis functions appropriately. The framework is ideally suited for hybrid signal processing because it can jointly represent the effect of the various (analog or digital) components of the system. This point will be illustrated with the design of hybrid systems for improved AtoD and DtoA conversion. On the more fundamental front, the proposed formulation sheds new light on the striking parallel that exists between the basic analog and discrete operators in the classical theory of linear systems. 



Splines: On Scale, Differential Operators and Fast Algorithms 

M. Unser 

5th International Conference on Scale Space and PDE Methods in Computer Vision, Hofgeismar, Germany, April 6–10, 2005. 

