2024
From analog to digital: The unifying role of splines in Science and Engineering
Michael Unser
Meeting • 2024-01-19 • 00369
A Box-Spline Framework to Solve Inverse Problems with Sparsity Constraints in the Continuum
Mehrsa Pourya
Meeting • 2024-01-20 • 00370
Fluorescent Chemical Sensors
Aleix Boquet
Meeting • 2024-01-20 • 00371
Regularizing Inverse Problems with Generative Models
Martin Zach, Technische Universität Graz
Meeting • 2024-03-04 • 00373
Random ReLU Neural Networks as Non-Gaussian Processes
Rahul Parhi
Meeting • 2024-04-09 • 00374
To be defined
Jonathan Dong
Seminar • 2024-04-23 • 00393
Noise2VST: A novel zero-shot framework for real-world image denoising
Sébastien Herbreteau, EPFL
Meeting • 2024-06-25 • 00394
Revisiting PSF models: theory and implementation
Jonathan Dong, EPFL
Meeting • 2024-07-12 • 00395
Variational Patch-Based Sparse Dictionary Learning Model for Image Reconstruction
Stanislas Ducotterd, EPFL
Seminar • 2024-08-08 • 00396
Towards interpretable neural networks for low-level vision tasks
Luis Albert Zavala-Mondragon, Postdoctoral Researcher at Eindhoven University of Technology, the Netherlands
Seminar • 2024-10-07 • 00397
SplineOps: an open-source library for signal processing operations using splines
Pablo Garcia-Amorena, Scientific Assistant at EPFL
Meeting • 2024-09-24 • 00398
Exploring Symmetry in Parseval CNNs: Smarter Filters for Image Processing
Borna Khodabandeh, External Student at EPFL
Meeting • 2024-09-27 • 00399
2023
Pycsou: High Performance Computational Imaging with Python
Matthieu Simeoni
Seminar • 2023-03-28 • 00360
Orientation Estimation in 3D for Steerable Filters
Anshuman Sinha
Meeting • 2023-04-18 • 00361
A Neural-Network-Based Convex Regularizer for Image Reconstruction
Alexis Goujon
Meeting • 2023-05-02 • 00362
Convergence of grid-based inverse problems toward their continuous limit
Julien Fageot
Meeting • 2023-07-11 • 00363
Reconstructing Shapes and Motions from Unposed Images: From Molecular Complexes to Neural Radiance Fields
Axel Levy, Stanford University
Meeting • 2023-08-31 • 00364
Towards better conditioned and interpretable neural networks: a study of the normalization-equivariance property with application to image denoising
Sébastien Herbreteau, Centre Inria de l'Université de Rennes
Meeting • 2023-10-06 • 00365
Photoacoustic meets photoswitching
Yan Liu
Meeting • 2023-10-24 • 00366
2022
On Radon-Domain BV Spaces: The Native Spaces for Shallow Neural Networks
Rahul Parhi
Meeting • 2022-09-27 • 00354
Exact Discretization of First and Second-order Total Variation using Box Splines
Mehrsa Pourya
Meeting • 2022-10-24 • 00355
High-Speed Fourier Ptychography with Deep Spatiotemporal Priors
Pakshal Bohra
Meeting • 2022-11-08 • 00356
Improving Lipschitz-Constrained Neural Networks by Learning Activation Functions
Stanislas Ducotterd
Meeting • 2022-11-22 • 00357
How can deformations help to reconstruct images?
Sebastian Neumayer
Meeting • 2022-12-06 • 00358
Blind deconvolution: applications and algorithms
Jonathan Dong
Meeting • 2022-12-20 • 00359
On different faces of the representer theorem, non-Euclidean geometries, and randomness
Vincent Guillemet
Meeting • 2022-07-12 • 00367
100μPET
Aleix Boquet
Meeting • 2022-06-27 • 00368
Expressiveness of CPWL networks: the role of depth, width and activation complexity
Alexis Goujon
• 2022-02-21 • 00387
Continuity of image reconstruction with Lp regularization
Pol del Aguila Pla
• 2022-03-14 • 00388
Sebastian Neumayer
• 2022-04-25 • 00389
The expressive power of Graph Neural Networks: a unifying point of view
Giuseppe Alessio D’Inverno
• 2022-05-16 • 00390
Fast and accurate simulation of linear imaging systems
Dimitris Perdios
• 2022-05-30 • 00391
Turning morphology into numbers to learn some things about life
Virginie Uhlmann
• 2022-09-06 • 00392
2021
Optical Diffraction Tomography with Single-Molecule Localization Microscopy
Thanh-An Pham
Meeting • 08 February 2021 • 00341
Abstract: Single-molecule localization microscopy (SMLM) is a fluorescence microscopy technique that achieves super-resolution imaging by sequentially activating and localizing random sparse subsets of fluorophores. Each activated fluorophore emits light that then scatters through the sample, thus acting as a source of illumination from inside the sample. Hence, the sequence of SMLM frames carries information on the distribution of the refractive index of the sample. In the first part, we explore the possibility of exploiting this information to recover the refractive index of the imaged sample, given the localized molecules. Our results with simulated data suggest that it is possible to exploit the phase information that underlies the SMLM data. In the second part, we also refine the positions and intensities of the fluorophores. Consequently, our joint-optimisation scheme improves both the recovery of the refractive index and the SMLM localization.
Pycsou: A Python 3 package for solving linear inverse problems with state-of-the-art proximal algorithms
Matthieu Simeoni
Meeting • 22 February 2021 • 00342
Abstract: Matthieu will talk about Pycsou: a Python 3 package for solving linear inverse problems with state-of-the-art proximal algorithms. Similarly to GlobalBioIm, Pycsou implements, in a highly modular way, the main building blocks of generic penalised convex optimisation problems: cost functionals, penalty terms, and linear operators. The main features of the package are the following. It offers a rich collection of linear operators, loss functionals, and penalty functionals commonly used in practice. It implements arithmetic operations for linear operators, loss functionals, and penalty functionals, allowing one to add, subtract, scale, compose, exponentiate, or stack these objects with one another, and hence to quickly design custom, complex optimisation problems. It implements a rich collection of state-of-the-art iterative proximal algorithms, including efficient primal-dual splitting methods which involve only gradient steps, proximal steps, and simple linear evaluations. It supports matrix-free linear operators, making it easy to work with large-scale linear operators that may not fit in memory; such operators can be implemented from scratch by subclassing the abstract class LinearOperator, or built from Scipy sparse matrices, distributed Dask arrays, or Pylops matrix-free operators (which now support GPU computations). It implements automatic differentiation/proximation rules, allowing one to automatically compute the derivative/proximal operators of functionals constructed from arithmetic operations on the common functionals shipped with Pycsou. Finally, it leverages powerful rules of thumb for automatically setting the hyper-parameters of the provided proximal algorithms. Pycsou is designed to easily interface with the Python packages scipy.sparse and Pylops, which makes it possible to use the sparse linear-algebra routines from scipy.sparse on a Pycsou LinearOperator and to benefit from the large catalogue of linear operators and solvers from Pylops.
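To make the building-block idea concrete, here is a minimal, self-contained NumPy sketch, deliberately not Pycsou's actual API, of how a matrix-free linear operator (a forward/adjoint pair) combines with a proximal-gradient (ISTA) loop to solve an l1-penalised least-squares problem; all class and variable names are illustrative.

    import numpy as np

    class MatrixFreeOperator:
        """Minimal matrix-free operator: only forward and adjoint evaluations."""
        def __init__(self, shape, forward, adjoint):
            self.shape = shape        # (m, n)
            self.forward = forward    # x -> A @ x, without storing A
            self.adjoint = adjoint    # y -> A.T @ y

    # Example: a subsampled Gaussian convolution, defined without any matrix.
    n, m = 256, 64
    np.random.seed(0)
    kernel = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
    idx = np.sort(np.random.choice(n, m, replace=False))

    def fwd(x):
        return np.convolve(x, kernel, mode="same")[idx]

    def adj(y):
        up = np.zeros(n)
        up[idx] = y                   # adjoint of subsampling: zero-filling
        return np.convolve(up, kernel[::-1], mode="same")

    A = MatrixFreeOperator((m, n), fwd, adj)

    def ista(A, y, lam, step, n_iter=500):
        """Proximal gradient for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.adjoint(A.forward(x) - y)   # gradient of the data term
            z = x - step * grad                  # step <= 1/||A||^2 for convergence
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    x_true = np.zeros(n)
    x_true[[30, 100, 180]] = [1.0, -0.7, 0.5]
    y = A.forward(x_true) + 0.01 * np.random.randn(m)
    x_hat = ista(A, y, lam=0.01, step=0.02)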
Inverse problems for image-based characterisation of cellular mechanics: how do cells move?
Aleix Boquet
Meeting • 23 February 2021 • 00343
Abstract: While intracellular mechanics are essential to biological function, standard physical probes are too invasive to provide physiologically relevant insight. To measure the internal biophysical quantities necessary to the study of cell migration with microscopy imaging alone, we propose combining optical flow and continuum models into a single Bayesian PDE-constrained framework. This formulation transforms pixel intensity directly into physical measurements in the context of probability distributions. In particular, the posterior mean is the solution of an inverse problem that tracks image movement while satisfying a physical model, thus yielding estimates of the variables therein, whereas the posterior covariance derives measurement error from image noise. To make this approach tractable, we exploit the dual space via the adjoint method, and we rely on the compactness of the Hessian to work with low-rank approximations while assuring scale-independent convergence. We first test our method by reformulating image-based techniques in other domains, such as traction-force microscopy and elastography, increasing the accuracy of their measurements and providing error bounds, as well as generalising their boundary conditions. We then use our framework to study the cytoplasm in cell videos via fluid dynamics, obtaining unprecedented estimates of intracellular pressure gradients and forces that reconcile and extend multiple reports on cell migration.
Graphic: Graph-Based Hierarchical Clustering for Single-Molecule Localization Microscopy
Mehrsa Pourya
Meeting • 01 March 2021 • 00344
Continuous-Domain Formulation of Inverse Problems for Composite Sparse Plus Smooth Signals
Thomas Debarre
Meeting • 08 March 2021 • 00345
Abstract: We present a novel framework for reconstructing 1D composite signals, where one component is sparse and the other is smooth, from a finite number of linear measurements. We formulate the reconstruction problem as a continuous-domain regularized inverse problem with multiple penalties, and we prove that solutions of this problem are of the desired form (i.e., the sum of a sparse component and a smooth one). We then discretize this problem using Riesz bases, which yields a discrete problem that can be solved using standard algorithms. This discretization is exact in the sense that we solve the continuous-domain problem over the search space specified by our bases without any discretization error. We propose a complete algorithmic pipeline and demonstrate its feasibility on simulated data.
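For illustration, a composite continuous-domain problem of this kind can be written (with operators and norms chosen here only as a plausible example, not necessarily those of the talk) as
\[
\min_{f_1, f_2} \; \sum_{m=1}^{M} \bigl( y_m - \langle \nu_m, f_1 + f_2 \rangle \bigr)^2 + \lambda_1 \, \| \mathrm{L}_1 f_1 \|_{\mathcal{M}} + \lambda_2 \, \| \mathrm{L}_2 f_2 \|_{L_2}^2,
\]
where the total-variation-type norm $\|\cdot\|_{\mathcal{M}}$ promotes a sparse (spline) component and the quadratic term a smooth one; the representer theorem mentioned in the abstract then guarantees solutions of exactly this sparse-plus-smooth form.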
Learning continuous and piecewise-linear functions and measuring their complexity
Joaquim Campos
Meeting • 29 March 2021 • 00346
Abstract: In this talk, I will discuss practical applications of the Hessian-Nuclear Total-Variation (HTV) semi-norm. The HTV functional bears a strong resemblance to second-order total variation in 1D. In particular, it also admits a closed-form expression for continuous and piecewise-linear (CPWL) functions and has a similar sparsity-promoting effect. These characteristics motivate us to develop an HTV-regularized learning framework based on a CPWL search space. In this manner, the infinite-dimensional learning problem can be exactly recast as a finite-dimensional one, which can be solved efficiently. Through numerical examples, we show that our algorithm constructs CPWL models with few facets. In the second part, I will briefly discuss an ongoing project on the use of the HTV to measure the complexity of CPWL models, with a special focus on ReLU networks.
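For context, the closed-form expression alluded to above can be stated as follows (notation assumed here, following the related BIG literature): for a CPWL function $f$ with affine facets,
\[
\mathrm{HTV}(f) \;=\; \sum_{k} \ell_k \, \bigl\| \nabla f_{k^{+}} - \nabla f_{k^{-}} \bigr\|_{2},
\]
where the sum runs over the edges of the underlying triangulation, $\ell_k$ is the length of the $k$-th edge, and $\nabla f_{k^{\pm}}$ are the (constant) gradients of the two facets meeting there. Penalizing this sum drives many gradient jumps to zero, which is why the learned models have few facets.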
Inverse problems in optical projection tomography: reconstruction and calibration
Yan Liu
Meeting • 12 April 2021 • 00348
Abstract: Optical projection tomography (OPT) produces high-resolution 3D images of fluorescent or nonfluorescent samples. However, the reconstructed OPT images sometimes suffer from a variety of artefacts in real experiments. Artefacts caused by mechanical errors in the imaging system can be corrected through calibration. We propose a 3D mathematical forward model that characterizes different types of mechanical errors, including translation of the sample, tilt and wobble of the rotation axis, and imprecision in the angular increment during rotation. A joint reconstruction-calibration framework is implemented to obtain a good estimate of the system parameters. Numerical simulations are carried out to reproduce and identify the cause of different artefacts. Simulation studies also show that, by using the calibrated system parameters, artefacts are successfully removed from the reconstruction.
Ultrasound Imaging: From Physical Modeling to Deep Learning
Dimitris Perdios
Meeting • 26 April 2021 • 00349
My Life and Crimes in Electron Microscopy
Jasenko Zivanov
Meeting • 03 May 2021 • 00350
Abstract: I will recount my professional journey from computer vision into electron microscopy. I will use the opportunity to describe different aspects of electron microscopy, focussing on the electron optics of high-resolution electron cryo-microscopy (cryo-EM). Special emphasis will be placed on a number of computational methods that are necessary to reach true atomic resolution.
Wavelet Compressibility of Compound Poisson Processes
Shayan Aziznejad
Meeting • 17 May 2021 • 00351
TD-DIP: A Versatile Tool for Dynamic Imaging and its Application to Fast Structured Illumination Microscopy (SIM)
Jaejun Yoo
Meeting • 31 May 2021 • 00352
Coupled Splines for Sparse Curve Fitting
Icíar Lloréns Jover
Meeting • 07 June 2021 • 00353
Abstract: In this talk, we show how to construct sparse parametric continuous curve models to fit a sequence of contour points using an inverse problem formulation. Our prior, enforced into our model as a regularization term, is motivated by our need for rotation invariance and sparsity. We extend our problem formulation to curves made of two distinct components with complementary smoothness properties. Both tasks can be solved using B-splines as interpolating functions. We illustrate the performance of our model on different contours having different smoothness properties. Our experimental results show that we can faithfully reconstruct any general contour using few parameters despite possible imprecisions in the measurements.
Fundamental bounds on the precision of phase and coherent localization microscopy
Jonathan Dong
• 2021-07-19 • 00375
Bayesian Inversion for Nonlinear Imaging Models using Neural Networks
Pakshal Bohra
• 2021-08-02 • 00376
Stable Parametrization of Continuous and Piecewise Linear Functions
Alexis Goujon
• 2021-09-06 • 00377
Analysis by Compression: An end-to-end approach to cryo-EM
Jasenko Zivanov
• 2021-09-13 • 00378
Spline Density Functions
Pol del Aguila Pla
• 2021-09-27 • 00379
Exploiting local regularity properties to boost and expand safe-screening
Emmanuel Soubies
• 2021-10-11 • 00380
The troublesome kernel - on AI-generated hallucinations in deep learning for inverse problems
Nina Gottschling
• 2021-10-29 • 00381
Efficient and Robust Multigrid Based Physical Model for Diffraction Tomography
Thanh-an Pham
• 2021-11-08 • 00382
Ultrafast structured illumination microscopy for time-varying specimens using sparsity-promoting regularization
Thomas Debarre
• 2021-11-29 • 00383
Delaunay-Based Continuous and Piecewise-Linear Function Learning With Hessian Total-Variation Regularization
Mehrsa Pourya
• 2021-12-13 • 00384
The Radon transform and neural networks
Michael Unser
• 2021-12-17 • 00385
Structured low-rank methods for MRI reconstruction
Xinlin Zhang
• 2021-12-20 • 00386
2020
Solving various domain translation problems using deep convolutional framelets
Jaejun Yoo
Meeting • 11 February 2020 • 00322
Abstract: Domain translation is a general category that subsumes various problems, such as image-to-image translation, style transfer, super-resolution, and even inverse problems in some sense. In this talk, I first introduce deep convolutional framelets, the main tool we used to solve domain translation problems. My recent works on photorealistic style transfer (ICCV '19) and inverse scattering problems (SIAM '18, TMI '19) will be presented. I provide a sketch of the ideas behind the theory, which bridges signal processing and the U-Net-type architectures that are prevalent in recent deep-learning studies. Based on this understanding, we provide a simple but effective correction to a network architecture that is not only theoretically sound but also remarkably enhances performance in practice.
Robust Reconstruction of Fluorescence Molecular Tomography With An Optimized Illumination Pattern
Yan Liu, ETH
Meeting • 04 March 2020 • 00323
Abstract: Fluorescence molecular tomography (FMT) is an emerging powerful tool for biomedical research. Two factors influence FMT reconstruction most strongly. The first is the regularization technique: we replace traditional Tikhonov regularization with sparse regularization to improve reconstruction quality. The second is the illumination pattern. We take advantage of the discrete formulation of the forward problem to define an illumination pattern and the admissible set of patterns. We then add the restrictions of the admissible set, as different types of regularizers, to a discrepancy functional with the illumination pattern as the unknown and the reconstruction result as prior information, which generates another inverse problem. The computed optimal illumination pattern is then used for the next round of reconstruction. To sum up, we combine reconstruction with illumination-pattern optimization in a two-step approach that improves the quality of the reconstructed image in phantom simulations.
Rethinking Data Augmentation for Low-level Vision Tasks: A Comprehensive Analysis and a New Strategy, CutBlur
Jaejun Yoo
Meeting • 23 March 2020 • 00324
Abstract: Data augmentation is an effective way to improve the performance of deep networks. Unfortunately, current methods are mostly developed for high-level vision tasks (e.g., classification) and few have been studied for low-level vision tasks (e.g., image restoration). In this paper, we provide a comprehensive analysis of the existing augmentation methods applied to the super-resolution task. We find that methods that discard or manipulate pixels or features too much hamper image restoration, where the spatial relationship is very important. Based on our analyses, we propose CutBlur, which cuts a low-resolution patch and pastes it to the corresponding high-resolution image region, and vice versa. The key intuition of CutBlur is to enable a model to learn not only "how" but also "where" to super-resolve an image. By doing so, the model can understand "how much", instead of blindly learning to apply super-resolution to every given pixel. Our method consistently and significantly improves the performance across various scenarios, especially when the model is large and the data are collected in real-world environments. We also show that our method improves other low-level vision tasks, such as denoising and compression-artifact removal.
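A minimal NumPy sketch of the CutBlur operation as described above, assuming the low-resolution image has already been upsampled to the high-resolution grid; the function name, the patch-size convention, and the 50/50 direction choice are illustrative assumptions.

    import numpy as np

    def cutblur(hr, lr_up, ratio=0.5, rng=np.random):
        """Swap a random rectangle between an HR image and its LR counterpart.

        hr and lr_up have identical shape (H, W, C); lr_up is the low-resolution
        image upsampled to the HR grid. With probability 0.5 the LR patch is
        pasted into the HR image; otherwise the HR patch is pasted into lr_up.
        """
        h, w = hr.shape[:2]
        ph, pw = int(h * ratio), int(w * ratio)            # patch size
        top = rng.randint(0, h - ph + 1)
        left = rng.randint(0, w - pw + 1)
        out = hr.copy()
        if rng.rand() < 0.5:
            out[top:top + ph, left:left + pw] = lr_up[top:top + ph, left:left + pw]
        else:
            out = lr_up.copy()
            out[top:top + ph, left:left + pw] = hr[top:top + ph, left:left + pw]
        return out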
Gibbs Sampling-Based Statistical Inference for Inverse Problems
Pakshal Bohra
Meeting • 20 April 2020 • 00325
CryoGAN: A New Reconstruction Paradigm for Single-particle Cryo-EM Via Deep Adversarial Learning
Laurène Donati
Meeting • 27 April 2020 • 00326
Abstract: In this talk, we present CryoGAN, a new paradigm for single-particle cryo-EM reconstruction based on unsupervised deep adversarial learning. The major challenge in single-particle cryo-EM is that the measured particles have unknown poses. Current reconstruction techniques either estimate the poses or marginalize them away, steps that are computationally challenging. CryoGAN sidesteps this problem by using a generative adversarial network (GAN) to learn the 3D structure whose simulated projections most closely match the real data in a distributional sense. The architecture of CryoGAN resembles that of a standard GAN, with the twist that the generator network is replaced by a cryo-EM physics simulator. CryoGAN is an unsupervised algorithm that only demands picked particle images and CTF estimation as inputs; no initial volume estimate or prior training is needed. Moreover, it requires minimal user interaction and can provide reconstructions in a matter of hours on a high-end GPU. Current results on synthetic datasets show that CryoGAN can reconstruct a high-resolution volume with its adversarial learning scheme. Preliminary results on real β-galactosidase data demonstrate its ability to capture and exploit real data statistics in more challenging imaging conditions. If time permits, we will also discuss its extension to multiple conformations.
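As a toy illustration of the scheme (not the authors' implementation), the PyTorch sketch below treats a learnable 3D volume as the "generator" and pushes it through a crude stand-in simulator, random axis-aligned rotations followed by a sum projection, whereas the real pipeline uses arbitrary poses, a CTF, and noise; the discriminator architecture and all hyper-parameters are assumptions.

    import torch
    import torch.nn as nn

    D = 32
    volume = torch.zeros(D, D, D, requires_grad=True)  # 3D structure to be learned

    def simulate(vol, batch):
        # Crude stand-in for the cryo-EM simulator: random 90-degree rotations
        # followed by a sum projection (real CryoGAN: full poses + CTF + noise).
        imgs = []
        for _ in range(batch):
            k = torch.randint(0, 4, (3,))
            v = torch.rot90(vol, int(k[0]), dims=(0, 1))
            v = torch.rot90(v, int(k[1]), dims=(1, 2))
            v = torch.rot90(v, int(k[2]), dims=(0, 2))
            imgs.append(v.sum(dim=0))               # line-integral projection
        return torch.stack(imgs).unsqueeze(1)       # (batch, 1, D, D)

    disc = nn.Sequential(                           # small CNN discriminator
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.Linear(32 * (D // 4) ** 2, 1))

    opt_v = torch.optim.Adam([volume], lr=1e-2)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def training_step(real_imgs):                   # real_imgs: (batch, 1, D, D)
        b = real_imgs.shape[0]
        # 1) Discriminator: tell real projections from simulated ones.
        fake = simulate(volume, b)
        d_loss = bce(disc(real_imgs), torch.ones(b, 1)) + \
                 bce(disc(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # 2) Volume update: make simulated projections look real to the critic.
        g_loss = bce(disc(simulate(volume, b)), torch.ones(b, 1))
        opt_v.zero_grad(); g_loss.backward(); opt_v.step()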
CryoGAN: A New Reconstruction Paradigm for Single-particle Cryo-EM Via Deep Adversarial Learning
Harshit Gupta
Seminar • 27 April 2020 • 00327
Abstract: In this talk, we present CryoGAN, a new paradigm for single-particle cryo-EM reconstruction based on unsupervised deep adversarial learning. The major challenge in single-particle cryo-EM is that the measured particles have unknown poses. Current reconstruction techniques either estimate the poses or marginalize them away, steps that are computationally challenging. CryoGAN sidesteps this problem by using a generative adversarial network (GAN) to learn the 3D structure whose simulated projections most closely match the real data in a distributional sense. The architecture of CryoGAN resembles that of a standard GAN, with the twist that the generator network is replaced by a cryo-EM physics simulator. CryoGAN is an unsupervised algorithm that only demands picked particle images and CTF estimation as inputs; no initial volume estimate or prior training is needed. Moreover, it requires minimal user interaction and can provide reconstructions in a matter of hours on a high-end GPU. Current results on synthetic datasets show that CryoGAN can reconstruct a high-resolution volume with its adversarial learning scheme. Preliminary results on real β-galactosidase data demonstrate its ability to capture and exploit real data statistics in more challenging imaging conditions. If time permits, we will also discuss its extension to multiple conformations.
Matrix factorization and phase retrieval for deep fluorescence microscopy
Jonathan Dong, LKB ENS
Meeting • 11 May 2020 • 00328
Abstract: Imaging deep inside biological tissues remains a hard challenge nowadays due to multiple light scattering, but it could enable us to observe the activity of individual neurons in the brain. Leveraging recent algorithmic advances, we show how to use matrix factorization and phase retrieval for fluorescence microscopy, as an example of the fruitful interplay between optics and non-linear optimization.
Space Varying Blurs: Estimation, Identification and Applications
Valentin Debarnot
Meeting • 18 May 2020 • 00329
Abstract: Standard approaches in microscopy generally assume that the optical blur is stationary in the field of view. This assumption is generally not valid, especially when looking at large fields of view. In this presentation, we will show how to estimate a subspace of spatially varying blur operators that characterizes a given microscope. In a second step, we will show how to use this subspace to solve blind-deblurring problems in microscopy. We will present two approaches: the first exploits the fact that the signal is composed of beads; the second uses neural networks.
Robust Phase Unwrapping via Deep Image Prior for Quantitative Phase Imaging
Fangshu Yang
Meeting • 22 June 2020 • 00330
Abstract: Phase unwrapping plays an important role in quantitative phase imaging. As biological specimens such as organoids become more complex, the corresponding unwrapping problem becomes more challenging. Recently, deep-learning-based frameworks have achieved unprecedented performance in a variety of applications; unfortunately, end-to-end supervised-learning approaches need large representative training sets, which are difficult to acquire for complex biological samples. In this talk, we present a robust and versatile framework for 2D phase unwrapping inspired by the concept of the deep image prior (DIP), called PUDIP. We experimentally demonstrate that the proposed method faithfully unwraps phase images on both real and simulated data without ground truth.
Measuring Complexity of Deep Neural Networks
Shayan Aziznejad
Meeting • 29 June 2020 • 00331
Convex Optimization in Infinite Sums of Banach Spaces Using Besov Regularization
Benoît Sauty De Chalon
Meeting • 13 July 2020 • 00332
Abstract: We characterize the solutions of a broad class of convex optimization problems for the recovery of a function from a finite number of linear measurements. We take interest in the case where the solution is decomposable as $f=\sum_{n\in\mathbb{N}} f_n$ into infinitely many subcomponents, where each component belongs to a prescribed Banach space $\mathcal{B}_n$, while ensuring that the problem is well-posed by penalizing some composite norm of the solution. We begin by deriving a general representer theorem that states conditions for the existence of solutions and gives a parametric representation of the solution components. Namely, they can be decomposed as a sparse sum of extremal points of the unit balls of the $\mathcal{B}_n$ spaces. Then, we apply this framework to the study of functions that admit a multi-resolution wavelet decomposition, using the well-known Shannon wavelet and a fitting regularization norm inspired by Besov spaces, to obtain a generalized sparse dictionary-learning technique.
Shortest Multi-spline Bases for Generalized Sampling
Alexis Goujon
Meeting • 03 August 2020 • 00333
Abstract: Generalized sampling consists in recovering a function f from the samples of its response to N>=1 linear shift-invariant systems. Relevant reconstruction spaces include finitely generated shift-invariant spaces that are able to reproduce polynomials up to a given degree M. While this property guarantees an approximation power of order (M+1), it comes at a price: we prove that the sum of the sizes of the supports of the generators is necessarily greater than or equal to (M+1). When there is equality, the generating functions constitute a shortest-support basis that is perfectly suited for applications, since it minimizes the computational cost and, in addition, necessarily forms a Riesz basis. Interestingly, for any multi-spline space $S_{n_1}+...+S_{n_N}$, a shortest-support basis can be constructed recursively, which generalises the well-known B-splines. These theoretical results pave the way for exciting applications, such as derivative sampling with arbitrarily high approximation power.
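In symbols (notation introduced here for illustration): if $\varphi_1,\dots,\varphi_N$ generate a shift-invariant space that reproduces polynomials of degree $M$, the result states that
\[
\sum_{n=1}^{N} \bigl| \operatorname{supp} \varphi_n \bigr| \;\geq\; M+1,
\]
and equality characterizes the shortest-support bases, which are then automatically Riesz bases.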
Time-dependent deep image prior for dynamic MRI
Jaejun Yoo
Meeting • 08 September 2020 • 00334
Abstract: In this seminar, I would like to share our recent work on reconstructing dynamic MRI images. I will also share my experience and some (among numerous others) of my failed attempts that might be useful for your projects. We introduce a novel unsupervised deep-learning-based image reconstruction algorithm for dynamic magnetic resonance imaging (MRI). To accelerate MRI, every existing method relies on a partial sampling of the k-space to reduce the acquisition time. To compensate for the information loss due to this partial sampling, these methods either exploit a hand-crafted prior, such as sparsity in certain transform domains (compressed sensing), or use a neural network to learn a mapping from partially sampled data to fully sampled data (supervised learning), the latter being expensive to acquire and generally unavailable. Unlike previous approaches, our method learns to encode the temporal redundancy of the measurements and to decode the corresponding image sequences using the strong structural prior of convolutional neural networks (CNNs). By carefully designing a latent space and optimizing the CNNs, our method improves the reconstructed image quality by 3 dB over the previous state of the art in a fully unsupervised manner.
Inverse Problems with Fourier-Domain Measurements and gTV Regularization: uniqueness and reconstruction algorithm
Thomas Debarre
Meeting • 22 September 2020 • 00335
Abstract: We study the super-resolution problem of recovering a sparse periodic continuous-domain function from its low-frequency information. We provide a new analysis of constrained optimization problems with total-variation (TV) regularization over Radon measures. In particular, we demonstrate that the solution is not necessarily unique, and we identify a general sufficient condition for uniqueness, expressed in terms of the Fourier-domain measurements. We then apply this result to prove that when the TV regularization includes a derivative operator of any order (generalized TV), the solution is always unique and is a periodic spline. We propose an adaptation of the sliding Frank-Wolfe algorithm for spline reconstruction with generalized TV regularization in a noisy setting.
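In a form consistent with the abstract (details stated here as assumptions), the constrained problem reads
\[
\min_{w \in \mathcal{M}(\mathbb{T})} \; \| \mathrm{L} w \|_{\mathcal{M}} \quad \text{s.t.} \quad \widehat{w}[k] = y_k \ \text{for} \ |k| \leq K_c,
\]
where $\mathcal{M}(\mathbb{T})$ denotes the periodic Radon measures, $\widehat{w}[k]$ are Fourier coefficients, and $\mathrm{L}$ is the regularization operator; the uniqueness result concerns the case where $\mathrm{L}$ includes a derivative of some order (generalized TV), for which the solution is a periodic spline.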
Optimal transport-based metric for single-molecule localization microscopy (SMLM)
Pol del Aguila Pla
Meeting • 20 October 2020 • 00336
Wavelets in harmonic analysis and signal processing
Michael Unser
Meeting • 27 October 2020 • 00337
A Hybrid Stochastic Framework for Signal Recovery
Pakshal Bohra
Meeting • 10 November 2020 • 00338
Abstract: We construct a stochastic framework based on hybrid continuous-domain models for the derivation of algorithms that reconstruct multicomponent signals from noisy linear measurements. The hybrid models that we consider involve the superposition of elementary sparse processes which are solutions of linear stochastic differential equations driven by white Lévy noise. We derive a hybrid MAP estimator for the discretized models, and this results in a family of estimators that is consistent with some popular multi-penalty regularization schemes. We present an efficient ADMM implementation and illustrate the advantages of hybrid models with concrete examples.
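One plausible discrete instance of the resulting hybrid MAP estimator (the exact penalties depend on the driving Lévy noises, so this is only an illustrative assumption):
\[
\min_{x_1, x_2} \; \tfrac{1}{2} \| y - \mathrm{H}(x_1 + x_2) \|_2^2 + \lambda_1 \| \mathrm{L}_1 x_1 \|_1 + \lambda_2 \| \mathrm{L}_2 x_2 \|_2^2,
\]
which recovers a familiar multi-penalty (sparse-plus-Gaussian) regularization scheme and splits naturally across the variables in ADMM.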
Robust and Sparse Regression Models for One-Dimensional Data
Shayan Aziznejad
Meeting • 08 December 2020 • 00339
Recent algorithmic advances in Phase Retrieval
Jonathan Dong
Meeting • 22 December 2020 • 00340
Abstract: Phase retrieval is a longstanding problem in imaging, arising in astronomy, microscopy, and computer-generated holography. The underlying non-linear equation y = |Ax|² is non-convex and difficult to solve, and many different algorithms have been proposed in physics. On the other hand, this equation may also be seen as the simplest form of non-linear neural network: a one-layer network with a quadratic activation. This observation has generated a considerable amount of theoretical study in the past 5 years to better understand this computational problem. In particular, two new classes of algorithms developed in my previous group will be presented: spectral methods and Approximate Message Passing. I apologize in advance for the absence of splines in this presentation, but I hope we can find a remedy together in future research projects.
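To give a flavour of the first class, here is a minimal NumPy sketch of a generic spectral initialization for y = |Ax|², a textbook variant that uses the raw intensities as weights; the speaker's work may rely on a different pre-processing, so this is an assumption for illustration.

    import numpy as np

    def spectral_init(A, y, n_iter=100):
        """Estimate x (up to a global phase) from y = |A x|**2 via the leading
        eigenvector of the weighted covariance (1/m) A^H diag(y) A."""
        m, n = A.shape
        rng = np.random.default_rng(0)
        v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):                   # power iteration
            v = A.conj().T @ (y * (A @ v)) / m
            v /= np.linalg.norm(v)
        return np.sqrt(y.mean()) * v              # E|<a,x>|^2 = ||x||^2 for CN(0,I) rows

    # Toy check with a complex Gaussian sensing matrix.
    rng = np.random.default_rng(1)
    n, m = 64, 512
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = np.abs(A @ x) ** 2
    x0 = spectral_init(A, y)
    corr = np.abs(np.vdot(x0, x)) / (np.linalg.norm(x0) * np.linalg.norm(x))
    print(f"cosine similarity up to phase: {corr:.2f}")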
Shortest Multi-Spline Bases for Generalized Sampling
Alexis Goujon
Meeting • 24 November 2020 • 00347
Abstract: Generalized sampling consists in the recovery of a function from the samples of its response to a collection of linear shift-invariant systems. The reconstructed function is typically chosen from a finitely generated shift-invariant space that can reproduce polynomials up to a given degree M. While this property guarantees an approximation power of order (M + 1), it comes with a tradeoff on the size of the support of the basis functions. Specifically, we prove that the sum of the supports of the generators is necessarily not smaller than (M + 1). Following this result, we introduce the notion of shortest basis of degree M, which is motivated by our desire to minimize the computational costs. We then demonstrate that any basis of shortest support generates a Riesz basis. Finally, we introduce a recursive algorithm to construct the shortest-support basis for any multi-spline space. It constitutes a generalization of both polynomial and Hermite B-splines. This framework paves the way for novel applications, such as fast derivative sampling with arbitrarily high approximation power.
2019
Inner-Loop-Free ADMM for Cryo-EM
Laurène Donati
Meeting • 15 January 2019 • 00301
Abstract: Thanks to recent advances in signal processing, interest in fast l1-regularized reconstruction algorithms for cryo-electron microscopy (cryo-EM) has intensified. Approaches based on the alternating-direction method of multipliers (ADMM) are particularly well-suited, owing to the convergence speed and flexibility of use of this algorithm. Yet, the standard ADMM scheme still relies on a nested conjugate gradient (CG) to solve the linear step in its alternating-minimization procedure, which can be costly when handling large-scale problems. In this work, we present an inner-loop-free ADMM algorithm for 3D reconstruction in cryo-EM. By using an appropriate splitting scheme, we are able to avoid the use of CG for solving the linear step. This leads to a substantial increase in algorithmic speed, as demonstrated by our experiments.
Numerical Investigation of Continuous-Domain Lp-norm Regularization in Generalized Interpolation
Pakshal Bohra
Meeting • 19 February 2019 • 00302
Abstract: The aim of this project is to understand the effect of continuous-domain Lp-norm regularization (when 1 < p < 2). To that end, we study the Lp-regularized generalized interpolation problem. In order to solve this continuous-domain problem numerically, we propose a spline-based discretization scheme which leads to an exact discretization. The resulting discrete problem can be solved efficiently by existing optimization methods. We then present some numerical results which help us understand the behaviour of the Lp-regularized solution.
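Concretely, the generalized interpolation problem under study can be written as
\[
\min_{f} \; \| \mathrm{L} f \|_{L_p}^{p} \quad \text{s.t.} \quad \nu_m(f) = y_m, \quad m = 1, \dots, M, \qquad 1 < p < 2,
\]
where $\mathrm{L}$ is a regularization operator and the $\nu_m$ are the measurement functionals (notation assumed here for illustration); the spline-based scheme restricts the search to a B-spline space in which the $L_p$ penalty can be evaluated exactly, hence the exact discretization.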
Deep Learning for Non-Linear Inverse Problems
Fangshu Yang
Meeting • 02 April 2019 • 00303
Abstract: The aim of this talk is to introduce applications of deep learning to non-linear inverse problems. It includes a short discussion of seismic imaging and a detailed discussion of diffraction imaging. In order to overcome the limitations of conventional methods for solving ill-posed non-linear inverse problems, we propose to utilize deep learning as a tool or a regularizer to obtain high-resolution results. Numerical experiments demonstrate the performance of this method.
Measure Digital, Reconstruct Analog
Julien Fageot, Harvard John A. Paulson School of Engineering and Applied Sciences
Seminar • 16 April 2019 • 00304
Abstract: In recent years at BIG, efforts have been made to understand regularization methods for the reconstruction of analog signals from finitely many linear measurements. I will sum up what has been done so far - from pure theory to practical algorithms - and present some new challenges.
Cell detection by functional inverse diffusion and non-negative group sparsity
Pol del Aguila Pla, KTH Royal Institute of Technology
Seminar • 07 May 2019 • 00305
Abstract: Image-based immunoassays are designed to estimate the proportion of biological cells in a sample that generate a specific kind of particles. These assays are instrumental in biochemical, pharmacological and medical research, and have applications in disease diagnosis. In this talk, I describe the model, inverse problem, functional optimization framework, and algorithmic solution to analyze image-based immunoassays that we presented in [1] and [2]. In particular, I will delve into 1) the radiation-diffusion-adsorption-desorption partial differential equation and a re-parametrization of its solution in terms of convolutional operators, 2) the set-up, analysis and algorithmic solution of an optimization problem in Hilbert spaces to recover spatio-temporal information from a single image observation, and 3) the derivation of the proximal operator in function spaces for the non-negative group-sparsity regularizer. After discretization, our work results in a convergent, high-performing algorithm with 25 million optimization variables that requires the entire engineering toolbox of tips and tricks, and was recently incorporated in a commercial product [3]. If time allows, I will introduce our work in [4], in which we use the structure of our algorithm to learn a faster, approximate solver for our optimization problem. [1]: Pol del Aguila Pla and Joakim Jaldén, "Cell detection by functional inverse diffusion and non-negative group sparsity - Part I: Modeling and Inverse Problems", IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5407-5421, 2018. Access at: https://doi.org/10.1109/TSP.2018.2868258 [2]: Pol del Aguila Pla and Joakim Jaldén, "Cell detection by functional inverse diffusion and non-negative group sparsity - Part II: Proximal optimization and Performance Evaluation", IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5422-5437, 2018. Access at: https://doi.org/10.1109/TSP.2018.2868256 [3]: Mabtech Iris reader. See product page: https://www.mabtech.com/iris [4]: Pol del Aguila Pla, Vidit Saxena, and Joakim Jaldén, "SpotNet - Learned iterations for cell detection in image-based immunoassays", 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). Access at: https://arxiv.org/abs/1810.06132
Can neural networks always be trained? On the boundaries of deep learning
Matt J. Colbrook, University of Cambridge
Seminar • 06 May 2019 • 00306
Abstract: Deep learning has emerged as a competitive new tool in image reconstruction. However, recent results demonstrate that such methods are typically highly unstable: tiny, almost undetectable perturbations cause severe artefacts in the reconstruction, a major concern in practice. This is paradoxical given the existence of stable state-of-the-art methods for these problems. Thus, approximation-theoretic results non-constructively imply the existence of stable and accurate neural networks. Hence the fundamental question: can we explicitly construct/train stable and accurate neural networks for image reconstruction? I will discuss two results in this direction. The first is a negative result, saying that such constructions are in general impossible, even given access to the solutions of common optimisation algorithms such as basis pursuit. The second is a positive result, saying that under sparsity assumptions, such neural networks can be constructed. These neural networks are stable and theoretically competitive with state-of-the-art results from other methods. Numerical examples of competitive performance are also provided.
Total variation minimization through Domain Decomposition
Vasiliki Stergiopoulou
Meeting • 28 May 2019 • 00307
Abstract: Due to new modalities and recent advances in imaging, the size of the data is constantly increasing, and current reconstruction methods are limited by memory issues. The goal of this project is to introduce domain-decomposition methods, which allow us to reduce the reconstruction problem to a sequence of smaller sub-problems of a more manageable size. We propose two methods for total-variation minimization, one with non-overlapping subdomains and another with overlapping subdomains. Finally, as the convergence of both methods is slow, we suggest a multigrid approach for a better initialization and faster convergence.
Optimal Spline Generators for Derivative Sampling
Shayan Aziznejad
Meeting • 18 June 2019 • 00308
Abstract: The goal of derivative sampling is to reconstruct a signal from the samples of the function and of its first-order derivative. In this talk, we consider this problem over a shift-invariant reconstruction subspace generated by two compact-support functions. We assume that the reconstruction subspace reproduces polynomials up to a certain degree. We then derive a lower bound on the sum of the supports of its generators. Finally, we illustrate the tightness of our bound with some examples.
Multiple Kernel Regression with Sparsity Constraints
Shayan Aziznejad
Meeting • 18 June 2019 • 00309
Abstract: We consider the problem of learning a function from a sequence of its noisy samples in a continuous-domain hybrid search space. We adopt the generalized total-variation norm as a sparsity-promoting regularization term to make the problem well-posed. We prove that the solution of this problem admits a sparse kernel expansion with adaptive positions. We also show that the sparsity of the solution is upper-bounded by the number of data points. This allows for an enlargement of the search space and ensures the well-posedness of the problem.
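In formulas (single-kernel case, notation assumed here for illustration), the learning problem and the announced form of its solutions read
\[
\min_{f} \; \sum_{m=1}^{M} E\bigl(y_m, f(x_m)\bigr) + \lambda \, \| \mathrm{L} f \|_{\mathcal{M}}, \qquad f^{\star}(x) = \sum_{k=1}^{K} a_k \, \rho(x - \tau_k) + p(x),
\]
with $K \leq M$ adaptive centers $\tau_k$, $\rho$ a Green's function of the regularization operator $\mathrm{L}$ that plays the role of the kernel, and $p$ in the null space of $\mathrm{L}$; the bound $K \leq M$ is the sparsity statement of the abstract.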
An Introduction to Convolutional Neural Networks for Inverse Problems in Imaging
Harshit Gupta
Test Run • 09 July 2019 • 00310
Abstract: Between 2011 and 2017, error rates on the ImageNet Large-Scale Visual Recognition Challenge dropped from 25.8% to 2.25%; this improvement was driven by the development of convolutional neural networks (CNNs). Now, a plethora of CNN-based approaches are being applied to inverse problems in imaging. Should we expect the same dramatic improvements here? In this talk, I will survey some of the progress so far, including our recent work on X-ray computed tomography reconstruction.
Duality and Uniqueness for the gTV problem
Quentin Denoyelle
Meeting • 23 July 2019 • 00311
Abstract: I will present the basic ideas and the intuition behind the duality theory in convex optimization and how it can be useful to characterize the solutions of the gTV problem.
The Interpolation Problem with TV(2) Regularization
Thomas Debarre
Meeting • 30 July 2019 • 00312
Abstract: In this talk, we will study the 1D interpolation problem with TV(2) regularization using the tools of duality theory. We first present a complete description of the solution set. More precisely, we provide a characterization of uniqueness and give the form of all the solutions when the solution is not unique, including the (possibly infinitely many) sparsest ones. We then present an algorithm to solve the penalized interpolation problem, which produces one of the sparsest solutions given the data points and a regularization parameter.
Deep Learning for Magnetic Resonance Image Reconstruction and Analysis
Chen Qin
Seminar • 06 August 2019 • 00313
Abstract: Recent advances in deep learning have shown great potential for improving the entire medical imaging pipeline, from image acquisition and reconstruction to disease diagnosis. In this talk, I will mainly focus on magnetic resonance (MR) image reconstruction and analysis. Firstly, I will introduce my recent study on dynamic MR image reconstruction from highly undersampled k-space data. A CRNN (convolutional recurrent neural network) model will be presented, which casts the traditional iterative optimisation process in a learning setting and is able to exploit spatio-temporal redundancies effectively and efficiently. As a complement, a k-t NEXT (k-t Network with X-f Transform) method will be introduced, in which image signals are recovered by alternating the reconstruction process between x-f space and image space in an iterative fashion. Secondly, I will briefly present our recent research on parallel MRI reconstruction, where the variable-splitting idea is adopted and modeled in a deep-learning framework. Besides, some works on MRI analysis directly from undersampled data will also be presented, including cardiac segmentation and motion estimation, where we show that prediction directly from undersampled MRI can still achieve accurate performance, potentially enabling fast analysis for MR imaging.
Multivariate Haar wavelets and B-splines
Tanya Zaitseva
Meeting • 13 August 2019 • 00314
Abstract: Every day we need to store lots of information: images, audio, video, etc. One efficient way to do this is to use wavelets. The Haar system on the real line is the simplest example of wavelets. A multidimensional Haar system can be constructed as a direct product of one-dimensional systems; however, such systems have a large number of corresponding generating mother wavelet functions. That is why it is more effective to construct multivariate Haar functions using an arbitrary integer dilation matrix M. In this case, the support of the wavelet scaling function is a special fractal set. I will present a classification of all Haar systems with only one generating function and discuss their Hölder regularity, a useful characteristic of wavelet systems. High Hölder regularity implies fast convergence of the corresponding partial sums of wavelet expansions, which is also important in practice. If we convolve several wavelet scaling functions, we obtain multivariate B-splines. It turns out that multivariate B-splines based on these fractal sets have higher Hölder regularity than, for example, the famous rectangular B-splines, so they approximate functions better. These splines can also be used to construct Battle-Lemarié spline wavelets.
Sparse signal reconstruction using variational methods with fractional derivatives
Stefan Stojanovic
Meeting • 10 September 2019 • 00315
Abstract: Self-similar stochastic processes have many applications in signal processing (image analysis, speech synthesis, road/Ethernet traffic modeling...). These signals of varying sparsity can be reconstructed efficiently using variational methods. Since fractional derivatives are whitening operators for these processes, we formulate a continuous inverse problem with gTV regularization and generalized fractional derivatives as regularization operators. We then discretize this problem in the basis of periodic fractional B-splines and propose an algorithm that solves the discretized problem exactly.
Generating Sparse Stochastic Processes
Leello Tadesse Dadi
Meeting • 24 September 2019 • 00316
Abstract: The sparse-stochastic-process framework allows one to model signals as solutions of stochastic differential equations of the form Ls = w. We will see that any signal modelled in this way can be approximated by computer-friendly signals that solve the same differential equation. We will then propose an efficient method for generating these computer-friendly signals, so that practitioners can simulate arbitrarily close approximations of s when Ls = w.
Efficient methods for solving large scale inverse problems
Eran Treister, Computer Science Department at Ben Gurion University of the Negev, Beer Sheva, Israel
Seminar • 17 October 2019 • 00317
Multigrid Methods for the Helmholtz equation and its application in Optical Diffraction Tomography
Tao Hong, Department of Computer Science, Technion - Israel Institute of Technology
Meeting • 05 November 2019 • 00318
Lagrangian Tracking of Bubbles Entrained by a Plunging Jet
Alexis Goujon
Meeting • 19 November 2019 • 00319
Abstract: A liquid jet plunging into a pool of the same liquid may entrain air in the form of bubbles. This process has received much interest in the past due to the fundamental fluid-mechanics problems it covers and to its numerous applications, especially in bubble-mediated gas exchange. A study of plunging jets was undertaken, focusing on air-entrainment rates low enough to enable the individual tracking of bubbles. This Lagrangian point of view gives access to the bubbles' trajectories as well as their residence time, a crucial quantity for gas exchange. While individual bubble motion is essentially random (owing to turbulence), average quantities were found to behave coherently with respect to the jet's parameters. The project comprises experimental work, image processing, bubble tracking, and numerical simulations. The presentation will place a special emphasis on the tracking algorithm specially designed for this study. https://doi.org/10.1103/APS.DFD.2018.GFM.V0073
About the use of non-imaging data to improve domain adaptation for spinal cord segmentation on MRI
Benoît Sauty De Chalon
Meeting • 26 November 2019 • 00320
Abstract: Currently in the last year of my Master's in Bioinformatics in Paris, I am giving this presentation to introduce myself, my past work, and my centers of interest to the lab. If my profile fits the spirit of the lab, I hope to pursue a Master's internship and then a PhD at BIG. After a brief overview of the classes I took during my studies and of the projects relevant to medical imaging and computer vision, I will expand more specifically on the research internship I did last year at the NeuroPoly lab in Montreal, which specializes in spinal-cord MRI analysis. My task was to improve the segmentation models so that they perform well on data from unseen domains (new acquisition sequence, new scanner, new contrast, etc.). To do this, the idea was to give the model a physical prior by inputting acquisition metadata along with the image and performing feature-wise linear modulations of the feature maps in the segmentation CNN. I will also present some of the initiatives I took outside of this project for the lab workflow. Finally, I will present my interest in the work done at BIG and why I would like to work there.
Adaptive regularization for three-dimensional optical diffraction tomography
Thanh-An Pham
Meeting • 17 December 2019 • 00321
Abstract: In these times of cold winter, Christmas, New Year, and other holidays are approaching. What better reason could there be for planning a BIG meeting just before BIG Christmas? I will present the ISBI paper we submitted. Optical diffraction tomography (ODT) allows one to quantitatively measure the distribution of the refractive index of a sample. It relies on the resolution of an inverse scattering problem. Due to the limited range of views, as well as optical aberrations and speckle noise, the quality of ODT reconstructions is usually better in lateral planes than in the axial direction. In this work, we propose an adaptive regularization to mitigate this issue. We first learn a dictionary from the lateral planes of an initial reconstruction that is obtained with total-variation regularization. This dictionary is then used to enhance both the lateral and axial planes within a final reconstruction step. The proposed pipeline is validated on real data using an accurate nonlinear forward model. Comparisons with standard reconstructions are provided to show the benefit of the proposed framework.
2018
Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems
Anaïs Badoual, EPFL STI LIB
Meeting • 30 January 2018 • 00278
Abstract: This presentation deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms. In the variational approach, the reconstructed signal is the solution of an optimization problem that combines fidelity to the data with smoothness conditions imposed via a quadratic regularization associated with a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. For the two approaches, we give a generic form of the reconstructed signals for a broad class of problems. The specificity of this work is to provide a very general analysis and comparison of the two approaches for periodic signals.
Sparsity-based techniques for diffraction tomography
Thanh-an Pham, EPFL STI LIB
Meeting • 27 February 2018 • 00279
Abstract: Optical diffraction tomography (ODT) relies on solving an inverse scattering problem governed by the wave equation. Classical reconstruction algorithms are based on linear approximations of the forward model (Born or Rytov), which limits their applicability to thin samples with low refractive-index (RI) contrasts. In the ODT setting, the measurements are complex (amplitude and phase). However, because in some applications the phase of the scattered field cannot be measured, it is of interest to reconstruct the RI from intensity measurements. In the first part of my talk, I will present our recent work that has shown the benefit of adopting nonlinear models in the ODT setting (complex measurements). These models account for multiple scattering and reflections, improving the quality of reconstruction. I will then present another recent work that proposes a reconstruction framework to obtain RI maps from intensity-only measurements.
Structured Illumination and the Analysis of Single Molecules in Cells
Rainer Heintzmann, Institute of Photonic Technology, Jena, Germany
Seminar • 09 February 2018 • 00280
Abstract: In the past decade, revolutionary advances have been made in the field of microscopy imaging, some of which were honoured by the Nobel Prize in Chemistry 2014. One high-resolution method is based on transforming conventionally unresolvable details into measurable patterns with the help of an effect most people have already personally experienced: the Moiré effect. If two fine periodic patterns overlap, coarse patterns emerge. This is typically seen on a finely woven curtain folding back onto itself. Another example is the fast-moving coarse patterns seen on the double fences of a bridge above a motorway when approaching it by car. The microscopy method of structured illumination utilizes this effect by projecting a fine grating onto the sample and imaging the resulting coarser Moiré patterns, which contain the information about invisibly fine sample detail. With the help of computer reconstruction based on several such Moiré images, a high-resolution image of the sample can then be assembled. Another way to obtain a high-resolution map of the sample is to utilize the blinking behaviour inherent in most of the molecules used to stain the sample. Recent methodological advances (Cox et al., Nature Methods 9, 195-200, 2012) enable us to create pointillist high-resolution maps of molecular locations in a living biological sample, even if these molecules are not individually discernible in each of the many required individual images. Examples will be shown, including a film of a cell at a resolution of 30 millionths of a millimetre with 6 seconds between the individual movie frames.
Direct Reconstruction of Clipped Peaks in Bandlimited OFDM Signals
Kyong Hwan Jin, EPFL STI LIB
Meeting • 13 March 2018 • 00281
Abstract: The high Peak-to-Average Power Ratio (PAPR) of the transmitted signal is a challenging issue in Orthogonal Frequency Division Multiplexing (OFDM) systems. It causes significant errors in the transmitted bits due to the non-linearity of amplifiers and receivers. One approach to reduce the PAPR is amplitude clipping of the peaks. Clipping enables an OFDM system to reduce the PAPR, but it causes band distortions that lead to a loss of the transmitted information. Here, we propose a reconstruction algorithm for bandlimited OFDM signals after clipping that requires no additional transmitted bits. We formulate a minimum-norm reconstruction with a sinc-related basis to interpolate the clipped peaks. The reconstruction consists of two steps: a regression that recovers the coefficients of the reproducing kernels, and an interpolation with those coefficients to obtain the values of the clipped peaks. Because the reconstruction is based on a matrix inversion and an interpolation, the proposed algorithm is non-iterative. Thus, the method is free of tuning parameters and hardware-friendly for modern high-throughput digital-communication receivers. Our experiments on realistic simulations show that the reconstruction of clipped peaks can significantly reduce transmission errors.
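A simplified NumPy sketch of the two-step idea, regression of bandlimited-basis coefficients on the reliable (unclipped) samples, then interpolation of the clipped peaks; the real trigonometric basis and the clipping threshold used here are illustrative stand-ins for the paper's exact sinc-related construction.

    import numpy as np

    N, K = 256, 20                       # block length; highest active frequency
    rng = np.random.default_rng(1)
    t = np.arange(N)

    # Real bandlimited basis (DC, cosines, sines up to frequency K): 2K+1 columns.
    freqs = np.arange(1, K + 1)
    B = np.hstack([np.ones((N, 1)),
                   np.cos(2 * np.pi * np.outer(t, freqs) / N),
                   np.sin(2 * np.pi * np.outer(t, freqs) / N)])

    x = B @ rng.standard_normal(2 * K + 1)        # bandlimited OFDM-like block
    clip = 0.6 * np.abs(x).max()
    y = np.clip(x, -clip, clip)                   # amplitude clipping (PAPR reduction)

    good = np.abs(y) < clip                       # reliable, unclipped samples
    # Step 1 (regression): fit the basis coefficients on reliable samples only;
    # this is solvable as long as enough samples survive clipping.
    c_hat, *_ = np.linalg.lstsq(B[good], y[good], rcond=None)
    # Step 2 (interpolation): re-synthesize the clipped peaks from the fit.
    x_hat = y.copy()
    x_hat[~good] = B[~good] @ c_hat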
Fast Multiresolution Reconstruction for Cryo-EM
Laurène Donati, EPFL STI LIB
Meeting • 17 April 2018 • 00282
Abstract: We present a multiresolution reconstruction framework for single-particle analysis (SPA). The representation of 3D objects with scaled basis functions permits the reconstruction of volumes at any desired scale in real space. With this tool, one can select a level of coarseness for the reconstruction that is well adapted to the current under-determination of the measurements during the refinement procedure (e.g., few projection angles). In particular, we show that reconstruction performed at a coarse scale is more robust to errors on the angles and permits gains in computational speed. A key component of the proposed multiresolution scheme is its fast implementation. The costly step of the reconstruction, which previously hindered the use of advanced iterative methods in SPA, is formulated as a discrete convolution whose cost is independent of the number of projection directions. The inclusion of the CTF inside the imaging matrix is also done at no extra computational cost. Finally, by permitting full 3D regularisation, the framework is in itself a robust alternative to direct methods for performing reconstruction in adverse imaging conditions.
Hybrid spline dictionaries for continuous-domain inverse problems
Thomas Debarre, EPFL STI LIB
Meeting • 24 April 2018 • 00283
Abstract: We study 1D continuous-domain inverse problems with multiple gTV (generalized total-variation) regularization terms (i.e., several different regularization operators). This work is based on a continuous-domain representer theorem, which states that such inverse problems have hybrid-spline solutions. The total sparsity of these hybrid splines is bounded by the number of measurements. We show that such continuous-domain problems can be discretized in an exact way by using a union of B-spline dictionaries matched to the regularization operators. We then propose a multiresolution algorithm which selects an appropriate grid size depending on the problem at hand. Finally, we demonstrate the computational feasibility of our algorithm for regularization operators that are derivatives of multiple orders.
Fast rotational dictionary learning using steerability
Mike McCann, EPFL STI LIB
Meeting • 08 May 2018 • 00284
Abstract: In this talk, I will present work I have done with Adrien on rotational (also called "rotation-invariant" or "rotation-equivariant") sparse dictionary learning (DL). Starting with a set of training data, the goal of DL is to find a dictionary composed of elements (called "atoms") such that each element of the training data can be well approximated by a linear combination of a small number of atoms. In image-processing applications, these linear dictionaries struggle to capture the rotational and translational relationships between patches, resulting in dictionaries with low approximation power and a lack of specificity. These problems can be addressed by explicitly accounting for these transformations in the problem formulation, but the resulting learning algorithms are usually impractically slow. Here, we present a new technique for fast rotational DL that uses a discrete steerable basis to accelerate the learning. We demonstrate the usefulness of the technique in both coding and texture classification.
Influence of spatial context over color perception: unifying chromatic assimilation and simultaneous contrast into a neural field model
Anna Song, EPFL STI LIB
Meeting • 22 May 2018 • 00285
Abstract: We propose a neural field model of color perception. This model reconciles within a common framework two apparently contradictory perceptual phenomena: simultaneous contrast and chromatic assimilation. Previous works showed that they act simultaneously and can produce larger shifts in color-matching comparisons when acting synergistically with a spatially oscillating pattern. These perceptual chromatic shifts are expressed in the s-coordinates of a cone-based chromaticity space. It is suggested that, at some spatial location of an image viewed by a human observer, the color sensation elicited at this point tends to be perceptually attracted towards the colors of spatially neighboring points, while being repelled by the colors of farther points towards their respective opponents, and that these opposing actions occur at slightly different spatial scales, which allows them to be combined. However, a linear receptive-field model that uses a simple convolution to predict the color shift is not sufficient to explain the dependency of the shift on the initial chromatic coordinates of the test color. We introduce a neural field model in which a first-order integro-differential equation governs the evolution of neural activities in the cortical hypercolumns assumed to code for colors. We first recall the mathematical definition of colors and assume the proper definition of a good opponent space. In order to perform the mathematical analysis of the model, we make several simplifying assumptions. The connectivity kernel is assumed to be separable into the color and physical spaces, although this can be generalized to sums of separable kernels. As a first approximation, we also suppose that the three chromatic channels do not interact. Our model depends on a number of perceptually meaningful parameters. We study the bifurcations of its solutions around stationary solutions. Under some symmetry and periodicity hypotheses, we can predict, using equivariant bifurcation theory, the appearance of visual patterns called planforms, which can be interpreted as color hallucinations. Future psychophysical experiments may confirm this and support the relevance of the model. We have implemented the model in Python to simulate the evolution of the neural activities in the hypercolumns. We validate the model by fitting its parameters to real data with a multi-parameter regression, using the PyTorch library. The results should show that our model is capable of explaining both contrast and assimilation, and that the optimal parameters vary across human subjects according to their perceptual biases.
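As a rough illustration of such dynamics, the sketch below integrates a first-order neural field equation du/dt = -u + K*S(u) + I on one discretized chromatic channel. The kernel (short-range attraction, longer-range repulsion), the sigmoid, and all numerical values are assumptions made for illustration, not the fitted model of the talk:

```python
import numpy as np

# Hypothetical 1D discretization of a single chromatic channel; kernel widths,
# gains, and the nonlinearity are illustrative, not fitted values.
n, dt, steps = 200, 0.01, 2000
x = np.linspace(-1, 1, n)

def gaussian(d, s):
    return np.exp(-d**2 / (2 * s**2))

# Near-range attraction (assimilation) vs. longer-range repulsion (contrast).
d = x[:, None] - x[None, :]
K = 1.5 * gaussian(d, 0.05) - 0.8 * gaussian(d, 0.2)

S = lambda u: np.tanh(u)                    # sigmoidal firing-rate nonlinearity
I = 0.5 * np.sign(np.sin(8 * np.pi * x))    # spatially oscillating input pattern

u = np.zeros(n)
dx = x[1] - x[0]
for _ in range(steps):                      # forward-Euler integration of du/dt = -u + K S(u) + I
    u += dt * (-u + (K @ S(u)) * dx + I)
```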
Minimum Support Multi-Splines
Shayan Aziznejad, EPFL STI LIB
Meeting • 20 November 2018 • 00286
Abstract: In this talk, I will present Alireza's work during his internship in summer 2018. He focused on the optimal generators of hybrid spline spaces. The optimality is considered in two senses: 1) the minimum number of generators, and 2) the minimum support size. After presenting our optimality results, I will describe useful classes of optimal interpolators (in both senses) that can be used in interlaced and generalized sampling.
Subdivision-Based Active Contours --- Statistical optimality of Hermite splines for the reconstruction of self-similar signals --- The Role of Discretisation in X-Ray CT Reconstruction
Anaïs Badoual, Virginie Uhlmann, Michael McCann, EPFL STI LIB
Test Run • 29 May 2018 • 00287
Abstract: In this talk we present a new family of active contours that exploits subdivision schemes. Depending on the choice of the mask, such models can reproduce trigonometric or polynomial curves. They can also be designed to be interpolating, a property that is useful in user-interactive applications. These active contours are robust to noise and to the initialization. We illustrate their use for the segmentation of bioimages. ------- Hermite splines are commonly used for interpolating data when samples of the derivative are available, in a scheme called Hermite interpolation. Assuming a suitable statistical model, we demonstrate that this method is optimal for reconstructing random signals in Papoulis' generalized sampling framework. More precisely, we show the equivalence between cubic Hermite interpolation and the linear minimum mean-square error (LMMSE) estimation of a second-order Lévy process. ------- Discretization, i.e., representing a continuous-time function or operation with a discrete-time one, is unavoidable in solving inverse problems. In X-ray computed tomography (CT) reconstruction, the classical algorithm handles discretization "at the end". Modern approaches discretize "in the middle" or "at the beginning". In this talk, I will show how the latter provides algorithms that are mathematically rigorous and implementable. I will also discuss the choice of the basis function among pixels, B-splines, box splines, Kaiser-Bessel windows, and sincs.
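The middle contribution hinges on cubic Hermite interpolation from joint samples of a function and its derivative. A minimal sketch of that interpolation step, using the standard cubic Hermite basis on a uniform grid (the test signal is an arbitrary choice):

```python
import numpy as np

def hermite_interp(t, f, df, tt):
    """Piecewise-cubic Hermite interpolation from samples f(t) and f'(t)
    on a uniform grid t; evaluates at the query points tt."""
    h = t[1] - t[0]
    k = np.clip(np.searchsorted(t, tt) - 1, 0, len(t) - 2)
    s = (tt - t[k]) / h                       # local coordinate in [0, 1]
    h00 = (1 + 2*s) * (1 - s)**2              # standard cubic Hermite basis
    h10 = s * (1 - s)**2
    h01 = s**2 * (3 - 2*s)
    h11 = s**2 * (s - 1)
    return h00*f[k] + h10*h*df[k] + h01*f[k+1] + h11*h*df[k+1]

# Arbitrary test signal: one period of a sine, sampled with its derivative.
t = np.linspace(0, 1, 6)
f, df = np.sin(2*np.pi*t), 2*np.pi*np.cos(2*np.pi*t)
tt = np.linspace(0, 1, 101)
print(np.abs(hermite_interp(t, f, df, tt) - np.sin(2*np.pi*tt)).max())
```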
Theoretical and Numerical Analysis of Super-Resolution without Grid
Quentin Denoyelle, Université Paris Dauphine
Meeting • 19 June 2018 • 00288
Abstract: We study the noisy sparse-spikes super-resolution problem for positive measures using the BLASSO, an infinite-dimensional convex optimization problem that generalizes the LASSO to measures. First, we show that the support stability of the BLASSO for N clustered spikes is governed by an object called the (2N-1)-vanishing-derivatives pre-certificate. When it is non-degenerate, solving the BLASSO leads to exact support recovery of the initial measure, in a low-noise regime whose size is controlled by the minimal separation distance of the spikes. Then, we propose the Sliding Frank-Wolfe algorithm, based on the Frank-Wolfe algorithm with an added step that continuously moves the amplitudes and positions of the spikes, to solve the BLASSO. We show that, under mild assumptions, it converges in a finite number of iterations. We apply this algorithm to the 3D fluorescence-microscopy problem by comparing three models based on the PALM/STORM techniques.
Computational Super-Sectioning for Single-Slice Structured-Illumination Microscopy
Emmanuel Soubies, EPFL STI LIB
Meeting • 19 June 2018 • 00289
Abstract: While structured-illumination microscopy (SIM) is inherently a 3D technique, many biological questions can be addressed from the acquisition of a single focal plane with high lateral resolution. Unfortunately, the single-slice reconstruction of thick samples suffers from defocusing. This work describes an improved reconstruction method for 2D SIM measurements that relies on a 3D model. It enables the estimation of the out-of-focus signal of additional planes and improves the quality of the reconstruction. Given a single 2D acquisition, we are able to produce a reconstructed slice with a quality comparable to what would have been obtained, had a full 3D stack been acquired and reconstructed. The proposed algorithm relies on a specific formulation of the optimization problem together with the derivation of computationally efficient proximal operators. These developments allow us to deploy an efficient inner-loop-free alternating-direction method of multipliers.
Local Rotation Invariance and Directional Sensitivity of 3D Texture Operators: Comparing Classical Radiomics, CNNs and Spherical Harmonics
Adrien Depeursinge, EPFL STI LIB
Meeting • 26 June 2018 • 00290
Abstract: We define and investigate the Local Rotation Invariance (LRI) and Directional Sensitivity (DS) of radiomics features. Most of the classical features cannot combine the two properties, which are antagonistic in simple designs. We propose texture operators based on spherical-harmonic-wavelet (SHW) invariants and show that they are both LRI and DS. An experimental comparison of SHWs, popular radiomics operators, and O-group equivariant Convolutional Neural Networks (CNNs) for classifying 3D textures reveals the importance of combining the two properties for optimal pattern characterization.
An L1 representer theorem for vector-valued learning
Shayan Aziznejad, EPFL STI LIB
Meeting • 17 July 2018 • 00291
Abstract: In this talk, we present a theoretical study of the problem of learning a vector-valued function using generalized TV regularization. We propose a representer theorem that describes the solution set of this problem. Our representer theorem is based on the notion of non-uniform vector-valued L-splines, which we introduce and study. Finally, we mention several applications of our representer theorem.
Variational Framework for Continuous Angular Refinement and Reconstruction in Cryo-EM
Mona Zehni, EPFL STI LIB
Meeting • 14 August 2018 • 00292
Abstract: In the field of single-particle reconstruction (SPR) in cryo-electron microscopy (Cryo-EM), the target is to recover a high-resolution density map of the molecule from a set of particle images. In this imaging modality, each particle image corresponds to the X-ray transform of the molecule from an unknown 3D pose, corrupted by the contrast transfer function (CTF) of the microscope and by noise. In this presentation, we describe a variational framework that performs a joint optimization of the density map and the underlying angular variables. We solve this joint optimization problem by taking alternating ADMM and gradient-descent steps to update the density map and the 3D-pose variables iteratively. Note that our method serves as a 3D refinement step in the whole Cryo-EM pipeline; we thus start from an initial map and some rough estimates of the 3D poses and then refine both gradually. In our framework, unlike current state-of-the-art reconstruction techniques, we resolve the 3D-pose variables on the continuum. That is, rather than discretizing the space of 3D poses, we represent these variables in their continuous form. This enables us to perform gradient steps on the angular variables in order to minimize the cost function. Thus, rather than comparing each particle image with a set of template projections of the density map taken from discretized points on the sphere, which is the essence of projection-matching methods, we perform gradient steps to update these variables. Our preliminary results indicate that, starting from a coarse estimate of the map and the angles, refinement of both is achievable.
Looking beyond Pixels: Theory, Algorithms and Applications of Continuous Sparse Recovery
Hanjie Pan, Audiovisual Communications Laboratory (LCAV), EPFL
Seminar • 07 August 2018 • 00293
Abstract: Sparse recovery is a powerful tool that plays a central role in many applications. Conventional approaches usually resort to discretization, where the sparse signals are estimated on a pre-defined grid. However, the sparse signals do not line up conveniently on any grid in reality. We propose a continuous-domain sparse recovery technique by generalizing the finite-rate-of-innovation (FRI) sampling framework to cases with non-uniform measurements. We achieve this by identifying a set of unknown uniform sinusoidal samples (which are related to the sparse signal parameters to be estimated) and the linear transformation that links the uniform samples of sinusoids to the measurements. It is shown that the continuous-domain sparsity constraint can be equivalently enforced with a discrete convolution equation of these sinusoidal samples. Then, the sparse signal is reconstructed by minimizing the fitting error between the given and the re-synthesized measurements (based on the estimated sparse signal parameters) subject to the sparsity constraint. Further, we develop a multi-dimensional sampling framework for Diracs in two or higher dimensions with linear sample complexity. This is a significant improvement over previous methods, whose complexity increases exponentially with the space dimension. An efficient algorithm has been proposed to find a valid solution to the continuous-domain sparse recovery problem such that the reconstruction (i) satisfies the sparsity constraint; and (ii) fits the given measurements (up to the noise level). We validate the flexibility and robustness of the FRI-based continuous-domain sparse recovery in both simulations and experiments with real data in radio astronomy, acoustics, and microscopy.
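The discrete convolution equation at the heart of FRI recovery is the annihilation of the sinusoidal samples. A minimal sketch of this classical step (a noiseless 1D toy example with illustrative sizes; the null-space solve via the SVD is one common choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# K Diracs on [0, 1): sum-of-exponentials samples s[m] = sum_k a_k exp(-2j*pi*m*t_k).
K = 3
t_true = np.sort(rng.uniform(0, 1, K))
a = rng.uniform(1, 2, K)
m = np.arange(2 * K + 1)
s = (a * np.exp(-2j * np.pi * m[:, None] * t_true)).sum(axis=1)

# Annihilating filter h (length K+1) with (h * s)[m] = 0: build the Toeplitz
# system whose row i reads [s[i+K], s[i+K-1], ..., s[i]] and take its null space.
T = np.array([s[i + K - np.arange(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                     # null vector = filter coefficients (up to scale)

# The Dirac locations are encoded in the phases of the roots of the filter.
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1))
print(t_true, t_hat)
```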
Complex-order scale-invariant operators and self-similar processes
Arash Amini, Sharif University, Tehran, Iran
Meeting • 21 August 2018 • 00294
Abstract: Scale-invariant operators are those that translate a dilation of the input into the same dilation of the output. Typical examples are the ordinary derivative operators. Interestingly, the complete family of linear shift-invariant operators (filters) that are also scale-invariant is known. This family can be considered as the generalization of nth-order derivatives to zth-order derivatives, where z is a complex number. In this talk, I will introduce this family, their inverse and adjoint operators, and their properties. Finally, I will discuss how these operators can be applied to certain white-noise processes to generate self-similar processes.
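In the Fourier domain, such a zth-order derivative acts as the multiplier (iω)^z for a suitable branch choice. The sketch below applies it to a periodic signal via the FFT; the principal branch and the zeroed-out DC component are simplifying assumptions:

```python
import numpy as np

def fractional_derivative(x, z, dt=1.0):
    """Apply a z-th order derivative (z complex) to a periodic signal x
    via the Fourier multiplier (i*omega)**z (principal branch; omega = 0
    is set to zero as a simplifying convention)."""
    N = len(x)
    omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
    mult = np.zeros(N, dtype=complex)
    nz = omega != 0
    mult[nz] = (1j * omega[nz]) ** z
    return np.fft.ifft(np.fft.fft(x) * mult)

t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)
y = fractional_derivative(x, z=0.5 + 0.3j, dt=t[1] - t[0])
```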
Analysis of Planar Shapes through Shape Dictionary Learning with an Extension to Splines
Anna Song, EPFL STI LIB
Meeting • 28 August 2018 • 00295
Abstract: The C. elegans worm is a model organism extensively studied by biologists, easy to observe and manipulate. It is, up to now, the only animal whose connectome is entirely known. An active research field aims at linking its genetic material to motor behavior: indeed, investigating its swarming or swimming movements is the simplest way to observe a change due to a mutation. In this context, we propose to analyse worm shapes using Kendall's celebrated shape-space theory (1977). Using Hermite splines or landmarks to segment the worms, we extend this theory to spline curves, especially in the planar case, where the complex setting is remarkably clear and simple. This allows us to embed some of the lab's previous work into this more general framework. Our main contribution is to introduce a sparse shape dictionary that is able to reconstruct any worm with few atoms. Most importantly, our atoms resemble realistic worms. This leads to interpretable weights and two complementary ways of visualizing the dynamics of a worm. We hope that this method will be useful in the future for linking movement features to mutations, and for other living organisms as well.
PSF-Extractor: from fluorescent-bead measurements to a continuous PSF model
Emmanuel Soubies, EPFL STI LIB
Meeting • 11 September 2018 • 00296
Abstract: I will present some results of an ongoing work that aims at estimating a continuous point-spread-function (PSF) model from fluorescent-bead measurements. In contrast to traditional methods, the proposed approach takes into account the real size of the beads (i.e., no point-source assumption) and jointly estimates the bead positions (grid-free) and the continuous PSF model. This presentation will also be an opportunity to present the latest developments I have made in the library to deal properly with this kind of joint optimization approach.
Adversarially-Sandwiched VAEs for Inverse Problems
Harshit Gupta, EPFL STI LIB
Meeting • 02 October 2018 • 00297
Abstract: One of the main challenges of inverse problems is modelling (or learning) the data prior. Recently, neural-network-based generative modelling has shown an impressive ability to model (or estimate) this data distribution. These methods use a latent-variable-based parametrisation of the estimated distribution, which is useful for real-world signals. In this talk, we will first briefly discuss the two pillars of generative modelling: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). In GANs, a generator is used to generate samples from latent variables and a discriminator is trained to differentiate these generated fake samples from the real samples. Meanwhile, the generator is trained to produce realistic-looking samples so as to fool the discriminator. This method is equivalent to minimising the Jensen-Shannon Divergence (JSD) between the actual and the estimated distribution. However, GANs have many problems: they are hard to train, they lack an encoding architecture to produce latent representations of the data, and, more importantly, they do not explicitly give the estimated likelihood of the data. VAEs are encoder-decoder networks that are much easier to train and that explicitly estimate a lower bound on the likelihood. They are trained by maximising this lower bound on the estimated log-likelihood of the data, which is equivalent to minimising the Kullback-Leibler Divergence (KLD) between the actual and the estimated distribution. However, the KLD, unlike the JSD, is an asymmetric divergence and may yield inferior results. Finally, I will propose a new scheme to train VAEs, in which an upper and a lower bound of the log-likelihood are used to sandwich it. For a given sample from the decoder, a discriminator (or adversary) then decides whether the estimated likelihood of the sample is higher or lower than the actual likelihood. In the former case, an upper bound of the likelihood is minimised; in the latter, a lower bound is maximised. We show that this scheme, like GANs, is equivalent to minimising an upper bound on the JSD between the actual and the estimated distribution and reaches the global minimum iff the two are equal.
Sparse Coding with Projected Gradient Descent for Inverse Problems
Thanh-an Pham, EPFL STI LIB
Meeting • 23 October 2018 • 00298
Abstract: I will present a novel method of sparse coding for nonlinear inverse problems. It relies on a specific formulation of the optimisation problem that allows the use of the projected-gradient-descent (PGD) method. The main advantage of PGD over the conventional ADMM-based approach is that the costly step of inverting the forward model is avoided. Instead, we only require the computation of the gradient of the data-fidelity term (without an explicit inversion of the forward model). We will discuss the projection onto the set of constraints as well as the convergence of the algorithm. Finally, I will present some preliminary results.
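To make the mechanics concrete, here is a hedged sketch of PGD for a sparse-coding problem with a linear forward model (the talk targets nonlinear models, and its constraint set may differ; the hard-thresholding projection, the sparsity level S, and the random matrices are illustrative assumptions):

```python
import numpy as np

def pgd_sparse(y, H, D, S, step, iters=200):
    """Projected gradient descent for min_c ||y - H D c||^2  s.t.  ||c||_0 <= S.
    Only gradients of the data-fidelity term are needed; H is never inverted.
    The projection keeps the S largest-magnitude coefficients (hard thresholding)."""
    A = H @ D
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        c = c - step * A.T @ (A @ c - y)        # gradient step on the fidelity term
        keep = np.argsort(np.abs(c))[-S:]       # projection onto the sparsity set
        mask = np.zeros_like(c)
        mask[keep] = 1
        c *= mask
    return c

# Illustrative toy problem with random operators.
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 64))
D = rng.standard_normal((64, 128))
c_true = np.zeros(128)
c_true[rng.choice(128, 5, replace=False)] = 1.0
y = H @ D @ c_true
c_hat = pgd_sparse(y, H, D, S=5, step=1.0 / np.linalg.norm(H @ D, 2) ** 2)
```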
Self-Supervised Deep Active Accelerated MRI
Kyong Hwan Jin, EPFL STI LIB
Meeting • 27 November 2018 • 00299
Abstract: We propose to simultaneously learn to sample and to reconstruct magnetic resonance images (MRI) so as to maximize the reconstruction quality given a limited sample budget, in a self-supervised setup. Unlike existing deep methods that focus only on reconstructing given data, and are thus passive, we go beyond the current state of the art by considering both the data acquisition and the reconstruction process within a single deep-learning framework. Since our network learns to acquire data, it is active in nature. To do so, we simultaneously train two neural networks, one dedicated to reconstruction and the other to progressive sampling, each with an automatically generated supervision signal that links them together. The two supervision signals are created through Monte-Carlo tree search (MCTS). MCTS returns a better sampling pattern than what the current sampling network can give and, thus, a better final reconstruction. The sampling network is trained to mimic the MCTS results obtained with the previous sampling network, and is thus progressively enhanced. The reconstruction network is trained to give the highest reconstruction quality, given the MCTS sampling pattern. Through this framework, we are able to train the two networks without providing any direct supervision on sampling. We test our method on multiple MRI datasets, outperforming the state of the art.
Fast PET reconstruction: the home stretch
Michael McCann
Meeting • 11 December 2018 • 00300
Abstract: In this talk, I will update the group on my efforts to design a fast algorithm for positron emission tomography (PET) based on resampling the measurements. While this problem originally looked like low-hanging fruit, it has turned out to be a challenge. I will discuss a few of my failed attempts to resample using splines, then present a simple and effective algorithm that, surprisingly, involves no resampling at all. Finally, I will present my progress on finding a satisfactory explanation for why such an algorithm can work at all.
2017
BPConvNet for compressed sensing recovery in bioimaging
Kyong Jin, EPFL STI LIB
Meeting • 10 January 2017 • 00258
Abstract: Iterative reconstruction methods have become the standard approach to solving inverse problems in imaging, including denoising, deconvolution, and interpolation. With the appearance of compressed sensing, our theoretical understanding of these approaches evolved further, with remarkable outcomes. These advances have been particularly influential in the field of biomedical imaging, e.g., in magnetic resonance imaging (MRI) and X-ray computed tomography (CT). A more recent trend is deep learning, which has arisen as a promising framework providing state-of-the-art performance for image classification and segmentation, as well as for regression-type neural networks. In this presentation, we explore the relationship between CNNs and iterative optimization methods for one specific class of inverse problems: those where the normal operator associated with the forward model is a convolution. Based on this connection, we propose a method for solving these inverse problems by combining a fast, approximate solver with a CNN. We demonstrate the approach on low-view CT reconstruction and accelerated MRI using residual learning and multilevel learning.
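The residual-learning idea, a network that corrects the output of a fast approximate solver rather than reconstructing from scratch, can be sketched as follows. The actual architecture in this line of work is U-Net-like; this plain convolutional stack, its sizes, and the random placeholder data are simplifying assumptions:

```python
import torch
import torch.nn as nn

class ResidualCNN(nn.Module):
    """Minimal residual network: predicts a correction to a fast approximate
    reconstruction (e.g., FBP for CT), so the output is input + learned residual."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.net(x)  # residual learning

model = ResidualCNN()
fbp = torch.randn(4, 1, 64, 64)       # batch of approximate reconstructions (placeholder)
target = torch.randn(4, 1, 64, 64)    # ground-truth images (placeholder data)
loss = nn.functional.mse_loss(model(fbp), target)
loss.backward()
```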
A unified reconstruction framework for coherent imaging
Ferréol Soulez, EPFL STI LIB
Meeting • 24 January 2017 • 00259
Abstract: I will present a constrained framework for thin-sample (2D) image reconstruction in coherent imaging. It estimates a super-resolved image of both the phase and the amplitude (absorption) of the sample from several intensity-only measurements with varying acquisition parameters (depth, illumination angle, wavelength, and detector shift). After a brief introduction to light propagation, I will present the possible forward models and their sampling requirements. After the description of the ADMM scheme used for reconstruction, I will show some results on simulations and propose a heuristic to tune the parameters. Finally, I will present the work we've done with Manon Rostykus and Chris Moser's lab on a lensless device.
RKHS to find the Representer Theorem for regularization operators whose null space is not finite dimensional
Harshit Gupta, EPFL STI LIB
Meeting • 09 February 2017 • 00260
Abstract: We will discuss the use of the theory of Reproducing Kernel Hilbert Spaces (RKHS) to find the Representer Theorem for (a few) regularization operators whose null space is not finite-dimensional. Representer Theorems (RTs) deal with the parametric representation of the solution of a regularized linear inverse problem. Any RT needs a proper specification of the regularization and its operator, and of the search space (with both regularized and unregularized components). Extending RTs to higher dimensions poses the problem of the unavailability of operators with a finite-dimensional null space (a necessary assumption). Using the theory of RKHS can be advantageous in this sense: it helps in finding the search space (native space) for a given operator even when the null space is not finite-dimensional. This is done by including only a finite-dimensional part of the null space in the search space. The reproducing kernel of this space is essential for defining the search space itself and for finding the parametric form of the solution.
Inverse problems and multimodality for biological imaging
Denis Fortun, EPFL STI LIB
Meeting • 28 February 2017 • 00261
Abstract: My talk will be a rehearsal of my interview for a researcher position at the CNRS in France. It is strictly limited to 15 minutes and covers my previous thesis and postdoc work, as well as my research plan for the future. My previous work is divided into motion estimation and biological image reconstruction; the research plan focuses on inverse problems for biological imaging with a multimodal approach.
3D SIM and measurements time-reallocation for scanning based systems: Introduction and preliminary results on these two problems
Emmanuel Soubies, EPFL STI LIB
Meeting • 14 March 2017 • 00262
Abstract: In this presentation, I will give preliminary results on two problems. The first problem concerns super-resolution image reconstruction from 3D structured-illumination-microscopy acquisitions. After an introduction to this microscopy modality, I will present how we can address the inverse problem using our inverse-problem library. This setup being well suited to switching from one regularizer to another, I will then compare the efficiency of different regularizers using simulated data. The second problem deals with the time-reallocation of measurements in scanning-based systems. Current time-lapse image-processing methods assume that each frame of the recorded time series has been acquired within the same snapshot. Such an assumption is not true for scanning-based systems, which can result in a loss of both spatial and temporal resolution in the reconstructed 2/3D+t sequences and can lead to a misinterpretation of the underlying biological processes. The idea is thus to develop numerical reconstruction methods that solve the reconstruction problem by correctly reallocating the measurements within the space-time domain.
Multifractal analysis for signal and image classification
Stéphane Jaffard, UPEC
Seminar • 23 March 2017 • 00263
Abstract: Multifractal analysis was introduced at the end of the 1980s by physicists whose purpose was to relate global regularity indices associated with a signal (the velocity of turbulent fluids) to the distribution of pointwise singularities present in the data. Several variants were later proposed, including methods based on local suprema of the continuous wavelet transform, or Detrended Fluctuation Analysis (DFA). We will focus on methods based on wavelet coefficients, using an orthonormal wavelet basis. We will see how the tools supplied by multifractal analysis can be adapted to particular types of data: e.g., the use of p-leaders (local L^p norms of wavelet coefficients) vs. leaders (local suprema of wavelet coefficients) depending on the global regularity of the data, or anisotropic wavelet transforms for the analysis of anisotropic textures. We will also see how to adapt the analysis when the data do not present a global self-similarity. These ideas will be illustrated by a wide range of examples, such as (in 1D) turbulence, internet traffic, heart-beat intervals, and literary texts, and (in 2D) natural images, paintings, and photographic papers. As regards literary texts and paintings, we will see that parameters originating from multifractal tools can give rise to new methods in textometry and stylometry.
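As a rough illustration of the leader-based approach, the sketch below computes simplified wavelet leaders (local suprema of orthonormal wavelet-coefficient magnitudes over neighboring positions and finer scales). The wavelet, the decomposition depth, the reduced neighborhood, and the test signal are all illustrative assumptions; a proper multifractal estimator would then regress the log of the leaders' moments against scale:

```python
import numpy as np
import pywt

def wavelet_leaders(x, wavelet="db3", levels=8):
    """Simplified wavelet leaders: at each scale and position, the supremum of
    wavelet-coefficient magnitudes over the dyadic children at finer scales
    and over the immediate neighbors (a crude 3*lambda neighborhood)."""
    details = pywt.wavedec(x, wavelet, level=levels)[1:]   # coarsest -> finest
    leaders, prev = [], None
    for m in [np.abs(d) for d in details[::-1]]:           # finest -> coarsest
        if prev is not None:
            half = len(prev) // 2
            child = np.maximum(prev[0:2*half:2], prev[1:2*half:2])
            n = min(len(m), len(child))
            m = np.maximum(m[:n], child[:n])               # sup over finer scales
        padded = np.pad(m, 1, mode="edge")                 # include +-1 neighbors
        prev = np.maximum(np.maximum(padded[:-2], padded[1:-1]), padded[2:])
        leaders.append(prev)
    return leaders[::-1]                                   # coarsest -> finest

# Illustrative test signal: a random walk (Brownian-motion-like path).
x = np.cumsum(np.random.default_rng(0).standard_normal(4096))
L = wavelet_leaders(x)
```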
Chasing Mycobacteria
Virginie Uhlmann, EPFL STI LIB
Meeting • 10 April 2017 • 00264
Abstract: Once upon a time, a bachelor student started working on the problem of automating the analysis of time-lapse sequences of mycobacteria as a summer job. Seven years, a million lines of code, and several sleepless nights later, two final-year PhD students are seriously planning a celebration party as the first results are getting out. To properly segment and track the bugs, a gigantic pipeline mixing image processing, graph theory, integer programming, and machine learning was required. This talk recounts the story of this epic chase.
Compressed Sensing for Dose Reduction in STEM Tomography
Laurène Donati, EPFL STI LIB
Test Run • 11 April 2017 • 00265
Abstract: We designed a complete acquisition-reconstruction framework to reduce the radiation dosage in 3D scanning transmission electron microscopy (STEM). Projection measurements are acquired by randomly scanning a subset of pixels at every tilt-view (i.e., random-beam STEM or RB-STEM). High-quality images are then recovered from the randomly down-sampled measurements through a regularized tomographic reconstruction framework. By fulfilling the compressed-sensing requirements, the proposed approach improves the reconstruction of heavily down-sampled RB-STEM measurements over the current state-of-the-art technique. This development opens new perspectives in the search for methods permitting lower-dose 3D STEM imaging of electron-sensitive samples without degrading the quality of the reconstructed volume. A Matlab implementation of the proposed reconstruction algorithm has been made available online.
Optical Diffraction Tomography: Principles and Algorithms
Thanh-an Pham, EPFL STI LIB
Meeting • 09 May 2017 • 00266
Abstract: In this presentation, I will show preliminary results of my work on Optical Diffraction Tomography (ODT), which aims at reconstructing the refractive-index distribution of the studied medium. The acquisition setup illuminates the sample with a controlled incident field and measures the scattered field. Two types of scenarios are currently considered. The first one recovers the complex field by interferometry, whereas the second one only uses intensity measurements. After an introduction to the principles of ODT, I will introduce several measurement models that describe the wave behaviour at different levels of approximation, and will compare their reconstruction performances through 2D numerical simulations in the first scenario. After describing the problem formulation for the second scenario, I will compare the reconstruction performance of the two scenarios through 2D numerical simulations.
Lipid membranes and surface reconstruction - a biologically inspired method for 3D segmentation
Nicolas Chiaruttini, University of Geneva
Seminar • 16 May 2017 • 00267
Abstract: We present a direct 3D surface-segmentation method inspired by lipid-membrane physics. Contrary to level-set or mesh-based methods, the segmented surface is defined by a set of independent lipid particles that have a position and a normal vector. The "lipids", also called surfels, exert a force along their normal vector to adapt to the underlying 3D image (data-attachment term). Surface integrity is maintained by local surfel interactions, which also allow for topological changes (surface merging or splitting). Segmentation of multiple self-excluding volumes is easily implemented by keeping only repulsive terms between different classes of surfels. We implemented this method as a scriptable ImageJ plugin and parallelized the time-critical steps with CUDA. Using a standard desktop computer, we report the segmentation of 3D tissue from confocal images (~800 cells), of the human-brain surface from MRI sections, and of the endoplasmic reticulum from FIB-SEM data; the method converges within minutes for ~500k particles. (Demo video: https://www.youtube.com/watch?v=TBODq6dVczM)
First steps toward fast PET reconstruction
Mike McCann, EPFL STI LIB
Meeting • 30 May 2017 • 00268
Abstract: In this talk, I will describe my recent work on positron emission tomography (PET) reconstruction. PET is a medical imaging modality that can be used to observe metabolic processes in vivo. As with X-ray CT, the measured data correspond to line integrals of the volume to be reconstructed. However, these measurements do not fall on a regular grid but rather follow a complicated pattern determined by the geometry of the scanner. I will give a short introduction to the physics and geometry of PET, then discuss my efforts to resample PET data into a form amenable to the fast algorithms we developed for parallel-ray CT.
Fractional Integral transforms and Time-Frequency Representations
Prof. Ahmed I. Zayed, Department of Mathematical Sciences DePaul University
Seminar • 02 June 2017 • BM1130 • 00269
Abstract: In the last twenty years or so, a number of fractional integral transforms have been introduced; some are purely mathematical and some have practical applications, especially in signal processing and optics. In this talk we introduce some of these fractional integral transforms, in particular fractional Fourier transforms and their properties, and then discuss their relationship with time-frequency representations, such as radar ambiguity functions, Wigner distributions, and wavelets.
Exact Discretization of Continuous-Domain Linear Inverse Problems with Generalized TV Regularization Using B-Splines
Thomas Debarre, EPFL STI LIB
Meeting • 24 August 2017 • 00270
Abstract: We study continuous-domain linear inverse problems with generalized Total-Variation (gTV), expressed in terms of a regularization operator L. It has recently been proved that such inverse problems have sparse spline solutions, with fewer coefficients than the number of measurements. Moreover, the type of spline solely depends on L (L-splines), and is independent of the measurements. For computational feasibility, the continuous-domain inverse problem can be recast as a discrete, finite-dimensional problem by enforcing the spline knots to be located on a grid. However, expressing the L-spline coefficients in the dictionary basis of the Green's function of L is ill-suited for practical problems due to its infinite support. Instead, we propose to formulate the problem in the B-spline dictionary basis, which leads to better-conditioned system matrices. We therefore define a discrete linear inverse problem in the B-spline basis and propose an algorithmic scheme to compute its sparse solutions. We demonstrate that the latter is computationally feasible for 1D signals when L is an ordinary differential operator.
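A minimal sketch of the resulting discrete problem, min_c (1/2)||Ac - y||^2 + λ||Lc||_1, solved here with ADMM. The second-order finite-difference stand-in for the operator induced by L = D^2, the dense solves, and all sizes are illustrative assumptions, not the authors' algorithmic scheme:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0)

def admm_gtv(A, y, L, lam, rho=1.0, iters=300):
    """ADMM for min_c 0.5||A c - y||^2 + lam ||L c||_1, with L a discrete
    regularization operator acting on the (e.g., B-spline) coefficients c."""
    Q = np.linalg.inv(A.T @ A + rho * L.T @ L)     # dense solve: small problems only
    z = np.zeros(L.shape[0])
    u = np.zeros(L.shape[0])
    for _ in range(iters):
        c = Q @ (A.T @ y + rho * L.T @ (z - u))    # quadratic subproblem
        z = soft(L @ c + u, lam / rho)             # proximal step on the l1 term
        u += L @ c - z                             # dual update
    return c

# Illustrative setup: random measurements of the coefficients; second-order
# finite differences as a stand-in for the operator induced by L = D^2.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((30, n))
L = np.diff(np.eye(n), n=2, axis=0)
c = admm_gtv(A, rng.standard_normal(30), L, lam=0.5)
```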
GlobalBioIm Lib - v2: new tools, more flexibility, and improved composition rules.
Emmanuel Soubies, EPFL STI LIB
Meeting • 03 October 2017 • 00271
Abstract: In this talk I will present the new version of our inverse-problem library. I will explain the new functionalities from both the user's and the developer's point of view. I will also present the documentation, where you can find all the details (and much more!) that I will give during the meeting. Finally, I will show an example of use for 3D deconvolution.
Deep learning based data manifold projection - a new regularization for inverse problems
Harshit Gupta, EPFL STI LIB
Meeting • 17 October 2017 • 00272
Abstract: In this talk, I will present a CNN-based projection onto the data manifold as a new regularization scheme. Classical iterative algorithms regularize inverse problems by imposing priors that do not truly hold for real-world data (such as smoothness via the $\ell_2$-norm or sharp edges via the TV-norm). This results in solutions that drift increasingly far from the true solution as the ill-posedness of the problem increases. CNNs trained as high-dimensional (image-to-image) regressors have recently been used to efficiently solve inverse problems in imaging. However, these approaches not only lack any regularization but are also unable to enforce data fidelity; they are therefore unreliable in terms of the prior they impose and the measurement consistency of their solutions. I will show that our scheme is built on the framework of Projected Gradient Descent (PGD), where the projector is replaced by the trained CNN. The gradient descent enforces the data fidelity, while the CNN recursively projects the solution closer to the space of desired reconstruction images. Since the projector is replaced with a CNN, I will present a relaxed PGD, which always converges. I will discuss a simple scheme to train a CNN to act like a projector. Finally, I will present experiments on sparse-view Computed Tomography (CT) reconstruction, for both noiseless and noisy measurements, which show an improvement over the total-variation (TV) method and a recent CNN-based technique.
Fundamental computational barriers in inverse problems and the mathematics of information
Alexander Bastounis, Cambridge University
Seminar • 27 October 2017 • 00273
Abstract: Two of the most influential recent developments in applied mathematics are neural networks and compressed sensing. Compressed sensing (e.g., via basis pursuit or the lasso) has seen considerable success at solving inverse problems, and neural networks are rapidly becoming commonplace in everyday life, with use cases ranging from self-driving cars to automated music production. The observed success of these approaches would suggest that solving the underlying mathematical model on a computer is both well understood and computationally efficient. We will demonstrate that this is not the case. Instead, we show the following paradox: it is impossible to design algorithms that solve these problems to one significant figure when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. This occurs even when the input data are, in many senses, well conditioned, and it shows that every existing algorithm will fail on some simple inputs. Further analysis of the situation for neural networks leads to the following additional paradoxes of deep learning: (1) one cannot guarantee the existence of algorithms for accurately training the neural network, and (2) one can have a 100% success rate on arbitrarily many test cases, yet uncountably many misclassifications on elements that are arbitrarily close to the training set. Explaining the apparent contradiction between the observed success of compressed sensing, the lasso, and neural networks on real-world examples and the aforementioned non-existence result will require the development of new mathematical ideas and tools. We shall explain some of these ideas and give further information on all of the above paradoxes during the talk.
Variational use of B-splines and Kernel Based Functions
Christophe Rabut, INSA Toulouse
Seminar • 27 October 2017 • 00274
Abstract: Kernel-based functions are generalizations of spline functions and radial basis functions. These $\mathbb{R}^d \to \mathbb{R}$ functions are of the form $f = \sum_{i=1}^n \lambda_i \varphi(\cdot - x_i)$ or $f = \sum_{i=1}^n \lambda_i \varphi(\cdot - x_i) + p_k$, where $\varphi$ is called the kernel, $(x_i)_{i=1}^n \in (\mathbb{R}^d)^n$ are the so-called centers of $f$, $(\lambda_i)_{i=1}^n$ are real coefficients, and $p_k$ is a polynomial of degree $k$. When $\varphi$ is a bell-shaped function meeting certain properties (in particular the partition of unity, $\sum_{i \in \mathbb{Z}^d} \varphi(x - i) = 1$ for any $x \in \mathbb{R}^d$), we write it $B$ and call it, for short, a B-spline. In this talk we present two particular uses of these kernel-based functions, and a property of a specific polynomial interpolation. First, hierarchical B-splines: using B-splines of different scales and a mean-square optimization, we show how to approximate scattered data with the possibility of zooming on some regions, adaptively from the data. We thus obtain locally tensor-product functions, where the grid of the centers is finer in some regions and coarser in others. Second, in a CAGD setting and using modified (variational) Bézier curves or surfaces, we show that it is possible to derive B-spline curves or surfaces that are closer to (or further from) the control polygon, while remaining in the same vector space. This gives more flexibility to easily derive new forms. Third, we present variational polynomial interpolation, which is true polynomial interpolation of any given data, and so obtain a polynomial interpolation without the famous Runge oscillations. These interpolating polynomials converge towards the interpolating polynomial spline of the data.
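A minimal sketch of the basic kernel expansion (a Gaussian kernel, 1D scattered data, and a plain linear solve are illustrative choices; the hierarchical, multi-scale construction of the talk builds on top of this):

```python
import numpy as np

# Scattered-data fit with a bell-shaped kernel: f(x) = sum_i lambda_i * phi(x - x_i).
rng = np.random.default_rng(0)
centers = np.sort(rng.uniform(0, 1, 15))
values = np.sin(2 * np.pi * centers)            # arbitrary data to fit

phi = lambda r: np.exp(-(r / 0.1) ** 2)         # illustrative Gaussian kernel
G = phi(centers[:, None] - centers[None, :])    # interpolation (Gram) matrix
lam = np.linalg.solve(G, values)                # coefficients lambda_i

x = np.linspace(0, 1, 200)
f = phi(x[:, None] - centers[None, :]) @ lam    # evaluate the kernel expansion
# By construction, f interpolates the data exactly at the centers.
```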
Steer&Detect on Images
Julien Fageot, EPFL STI LIB
Meeting • 14 November 2017 • 00275
Abstract: With some people in the lab, we are working on the use of steerable filters for the fast and precise detection of patterns of interest present at unknown locations and orientations in an image. We aim at delivering a self-contained framework that achieves these goals, including the continuous-domain theory, the discrete-domain implementation scheme, and a user-friendly plugin. I will present these different aspects. This is ongoing work, and your critical feedback will be highly appreciated.
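The key computational trick behind steerable detection is that the response of a rotated filter is a fixed linear combination of a few basis responses, so the best orientation can be found in closed form. A minimal first-order sketch with Gaussian derivatives (the filter sizes and the test image are illustrative; the framework of the talk uses richer steerable families):

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_gradient_filters(sigma=2.0, radius=8):
    """First-order Gaussian-derivative pair: any rotation of G_x is
    cos(theta) G_x + sin(theta) G_y (the simplest steerable family)."""
    r = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(r, r)
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return -X * g, -Y * g

def steered_detection(img):
    gx, gy = gaussian_gradient_filters()
    rx, ry = convolve(img, gx), convolve(img, gy)
    # Best orientation and maximal response at each pixel, in closed form:
    return np.arctan2(ry, rx), np.hypot(rx, ry)

img = np.zeros((64, 64))
img[32, :] = 1.0                      # illustrative test image: a horizontal line
theta, response = steered_detection(img)
```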
Continuous Representations in Bioimage Analysis: a Bridge from Pixels to the Real World
Virginie Uhlmann, EPFL STI LIB
Meeting • 12 December 2017 • 00276
Abstract: Images and video sequences, either at the macro or the microscopic level, are tools of choice to observe and characterize phenotypical variations. As a consequence, bioimage analysis has grown into an essential field of research. Recent advances in computer vision provide efficient tools which, given a set of examples, learn to predict which parts of the image hold relevant information. An obvious limitation of these methods is their discrete nature, leaving them bound to pixel grids and inherently unable to account for the fact that the real world is continuous. In this talk, we present a novel approach called landmark active contours, which efficiently complements state-of-the-art pixel-based computer-vision algorithms. While image acquisition turns real-world information into pixels, our method offers a way to go back from the digital to the continuous world. Landmark active contours consist of a mathematically well-defined continuous curve that uses the information provided by pixel-based maps to automatically outline objects in images. They simultaneously provide a segmentation algorithm and a particularly well-suited model for extracting precise quantitative information that characterizes the objects. By their nature, landmark active contours are extremely flexible and can easily adapt to the wide variety of bioimages. We will describe their theoretical construction and show their use through several practical examples.
Fast Piecewise-Affine Motion Estimation Without Segmentation
Denis Fortun, EPFL STI LIB
Meeting • 19 December 2017 • 00277
Abstract: Current algorithmic approaches for piecewise-affine motion estimation are based on alternating motion segmentation and estimation. We propose a new method to estimate piecewise-affine motion fields directly, without intermediate segmentation. To this end, we reformulate the problem by imposing piecewise constancy of the parameter field, and derive a specific proximal-splitting optimization scheme. A key component of our framework is an efficient one-dimensional piecewise-affine estimator for vector-valued signals. The first advantage of our approach over segmentation-based methods is that it requires no initialization. The second advantage is its lower computational cost, which is independent of the complexity of the motion field. In addition to these features, we demonstrate competitive accuracy with other piecewise-parametric methods on standard evaluation benchmarks. Our new regularization scheme also outperforms the more standard use of total variation and total generalized variation.
2016
Sparsity and the optimality of splines for inverse problems: Deterministic vs. statistical justifications
Michael Unser, EPFL STI LIB
Meeting • 23 February 2016 • BM 4 233 • 00236
Abstract: In recent years, significant progress has been achieved in the resolution of ill-posed linear inverse problems by imposing l1/TV regularization constraints on the solution. Such sparsity-promoting schemes are supported by the theory of compressed sensing, which is finite-dimensional for the most part. In this talk, we take an infinite-dimensional point of view by considering signals that are defined in the continuous domain. We claim that non-uniform splines whose type is matched to the regularization operator are optimal candidate solutions. We show that such functions are global minimizers of a broad family of convex variational problems where the measurements are linear and the regularization is a generalized form of total variation associated with some operator L. We then discuss the link with sparse stochastic processes that are solutions of the same type of differential equations. The pleasing outcome is that the statistical formulation yields maximum a posteriori (MAP) signal estimators that involve the same type of sparsity-promoting regularization, albeit in a discretized form. The latter corresponds to the log-likelihood of the projection of the stochastic model onto a finite-dimensional reconstruction space.
Fast 3D Reconstruction Method for Differential Phase Contrast X-ray CT
Mike McCann, EPFL STI LIB
Meeting • 08 March 2016 • BM 4 233 • 00237
Abstract: Our goal is fully 3D reconstruction for grating-based differential phase-contrast X-ray CT, which we approach in the standard TV-regularization + ADMM way. In this setting, the sinogram is large (for us, 1600px by 500px by 1200 views) and so is the volume to reconstruct (400px by 400px by 200px). We therefore need to take special care to compute the ADMM iterations efficiently. This boils down to having fast algorithms for applying H^T and H^TH. In this talk, I'll describe our approach to these two operations.
ICASSP 2016
Pedram Pad, EPFL STI LIB
Test Run • 15 March 2016 • BM 4 233 • 00238
Abstract: Title 1: Optimal Isotropic Wavelets for Localized Tight Frame Representations. Abstract 1: In this paper, we aim to identify the optimal isotropic mother wavelet for a given spatial dimension based on a localization criterion. Within the framework of the calculus of variations, we specify an Euler-Lagrange equation for this problem, and we find the unique analytic solutions. In the one- and two-dimensional cases, the derived wavelets are well known. Title 2: MMSE Denoising of Sparse and Non-Gaussian AR(1) Processes. Abstract 2: We propose two minimum-mean-square-error (MMSE) estimation methods for denoising non-Gaussian first-order autoregressive (AR(1)) processes. The first one is based on the message-passing framework and gives the exact theoretic MMSE estimator. The second is an iterative algorithm that combines standard wavelet-based thresholding with an optimized non-linearity and cycle-spinning. This method is more computationally efficient than the former and appears to provide the same optimal denoising results in practice. We illustrate the superior performance of both methods through numerical simulations by comparing them with other well-known denoising schemes.
ISBI 2016
Michael Unser, Denis Fortun, EPFL STI LIB
Test Run • 06 April 2016 • BM 4 233 • 00239
Abstract: Title 1: User-Friendly Image-Based Segmentation and Analysis of Chromosomes. Presenter: Prof. Unser. Abstract 1: We designed two efficient and user-friendly tools for the segmentation and analysis of images containing chromosomes or, more generally, rod-shaped elements that are spread on microscopic slides. The segmentation tool automatically extracts the profile of each chromosome and sorts the collection of profiles in a karyotype image. The analysis tool is interactive and allows the user to extract quantitative measurements and to annotate the relative position of the centromere to the chromosome extremities in a fast and reproducible way. The two methods rely on custom variants of parametric active contours. Both have been designed as user-friendly plug-ins for the open-source software ImageJ. Title 2: Isotropic resolution in fluorescence imaging by single-particle reconstruction. Presenter: Denis. Abstract 2: Low axial resolution is a major limitation of fluorescence-imaging modalities. We propose a methodology to achieve high isotropic resolution by reconstructing fluorescence volumes from observations of multiple particle replicates with different orientations. The challenge is to reconcile high reconstruction accuracy, which requires a large amount of input 3D data, with computational tractability. We achieve this goal by designing an iterative joint deconvolution and multiview reconstruction algorithm with an efficient augmented-Lagrangian-based optimization. The computational cost is limited to only two FFTs per iteration, regardless of the number of input particles. We also adopt the nuclear norm of the Hessian as a regularizer to avoid the usual staircase artifacts of the more standard total variation. Experimental validation on realistic simulated data demonstrates the efficiency and accuracy of our method.
Learning-Based approach in Single Molecule localization microscopy
Silvia Colabrese, Italian Institute of Technology, Genova, Italy
Meeting • 19 April 2016 • 00240
Abstract: In single-molecule localization microscopy, there has been little room for the exploitation of machine-learning techniques that have proven beneficial in other fields. During the last months, we have been investigating the use of Support Vector Machines to boost the detection rate of molecules; the results compare favorably with the state of the art. I am going to present the work that has been done and the many open issues that still remain.
Steerable Wavelet Machines (SWM): Learning Moving Frames for Texture Classification
Adrien Depeursinge, Emmanuel Soubies, EPFL STI LIB
Meeting • 03 May 2016 • BM 4 233 • 00241
Abstract: Title 1: Steerable Wavelet Machines (SWM): Learning Moving Frames for Texture Classification. Presenter 1: Adrien. Abstract 1: We present texture operators encoding class-specific local organizations of image directions (LOIDs) in a rotation-invariant fashion. The LOIDs are key for visual understanding and are at the origin of the success of popular approaches such as local binary patterns (LBPs) and the scale-invariant feature transform (SIFT). Whereas LBPs and SIFT yield handcrafted image representations, we propose to learn data-specific representations of the LOIDs in a rotation-invariant fashion. The image operators are based on steerable circular harmonic wavelets (CHWs), offering a rich and yet compact initial representation for characterizing natural textures. The joint location and orientation required to encode the LOIDs is preserved by using moving-frame (MF) texture representations built from locally steered multi-order CHWs. In a second step, we use support vector machines (SVMs) to learn a multi-class shaping matrix of the initial CHW representation, yielding data-driven MFs that are invariant to rigid motions. We experimentally demonstrate the effectiveness of the proposed operators for classifying natural textures. Title 2: Some results on MA-TIRF reconstruction and exact continuous penalties for l2-l0 minimization. Presenter 2: Emmanuel Soubies. Abstract 2: In the first part of this presentation, I will present some work related to Multi-Angle Total Internal Reflection Fluorescence (MA-TIRF) reconstruction. This microscopy technique is a method of choice to visualize membrane-substrate interactions. After an introduction to TIRF microscopy, I will present microscope-calibration techniques, which are essential for the success of reconstruction methods. Then, I will briefly introduce the reconstruction methods that can be used to solve the ill-posed inverse problem, allowing the computation of a quantitative depth map with high axial resolution. Finally, biological reconstructions on real samples will be presented. In the second part, I will focus on sparse approximation and, more precisely, on nonconvex continuous penalties approximating the l0 pseudo-norm within the framework of the l0-regularized least-squares problem. I will introduce the Continuous Exact l0 penalty (CEL0), an approximation of the l0-norm leading to a tight continuous relaxation of the l2-l0 criterion. Relationships between the minimizers of the initial and relaxed functionals will be presented, showing that the CEL0 functional provides an equivalent continuous reformulation of the l2-l0 objective. Thanks to the continuity of this relaxation, recent nonsmooth nonconvex algorithms can be used to address its minimization. Finally, applications in signal processing will be presented, and a unification of such continuous exact relaxations will be briefly discussed.
Shape-Constrained Tracking with Active Contours
Virginie Uhlmann, EPFL STI LIB
Meeting • 17 May 2016 • BM 4 233 • 00242
Abstract: We propose a shape-constrained tracking framework based on active contours for the study of worm motility. The main ingredient of our approach is the formulation of a shape space, which defines a set of admitted transformations of the worm body. It allows for the decomposition of worm motion into a collection of modes, hence giving insights into the nature of the different locomotion patterns present in a dataset.
A reconstruction framework for coherent imaging
Ferréol Soulez, EPFL STI LIB
Meeting • 31 May 2016 • BM 4 233 • 00243
Abstract: Due to the impossibility of recording the phase of visible light, coherent imaging involves intensity-only measurements (with or without a reference wave). Thus, for image reconstruction in coherent imaging, we face two problems: the (non-linear) phase-retrieval problem, to recover the phase of the light in the detector plane; and the (possibly non-linear) reconstruction problem, to estimate the (complex) refractive index of the studied sample. In this talk, I propose to solve these two problems jointly, as a constrained problem addressed with an augmented-Lagrangian formulation. The presentation will have three parts: a glimpse of convex optimization; proximity operators for intensity measurements; and an application to telescope tomography.
Trainable shrinkage splines: inverse problems meet deep learning
Ha Nguyen, EPFL STI LIB
Meeting • 28 June 2016 • BM 4 233 • 00244
Abstract: In this talk, I will briefly review a recent approach to inverse problems in which not only the dictionaries but also the proximal mappings (shrinkage functions) are learned from the data. I'll discuss the similarity between this approach and deep neural nets. I'll also share with you some of my theoretical observations about the proximal mappings and explain why splines provide good representations for such mappings. Finally, I'll present some of my initial experiments to give you an idea of what a learned shrinkage spline looks like.
Complete Compressed Sensing Framework for STEM Tomography
Laurène Donati, EPFL STI LIB
Meeting • 19 July 2016 • BM 4 233 • 00245
Abstract: A central challenge in scanning transmission electron microscopy (STEM) is to reduce the electron radiation dosage required for accurate imaging of 3D biological nano-structures. In this work, we demonstrate that random-beam scanning in STEM (RB-STEM) fulfills the "incoherence" condition required by the theory of compressed sensing when the image is expressed in terms of wavelets. We then propose a regularized tomographic reconstruction framework to recover high-quality images from RB-STEM datasets. Finally, we present a novel DigitalMicrograph® plug-in that implements stable random-beam scanning for data acquisition in STEM mode, leading to a further reduction in the heating of electron-sensitive biological samples. This complete application of compressed sensing principles to STEM paves the way for a practical implementation of RB-STEM and opens new perspectives for high-quality reconstructions in electron tomography.
K-space interpolation using CV(complex valued)-CNN & sparse and low-rank model of ALOHA
Kyong Jin, EPFL STI LIB
Meeting • 09 August 2016 • BM 4 233 • 00246
Abstract: In this presentation, I will present a complex-valued convolutional neural network for the accelerated-MRI problem. We observe a connection between the annihilating filter and the convolutional network for accelerated MRI. Furthermore, we have recently extended the ALOHA framework to a sparse and low-rank model. We will briefly show some applications and a flowchart.
ICIP 2016
Anaïs Badoual, EPFL STI LIB
Test Run • 20 September 2016 • BM 4 233 • 00247
Abstract:
Title 1: Local Refinement for 3D Deformable Parametric Surfaces. Biomedical image segmentation is an active field of research where deformable models have proved to be efficient. The geometric representation of such models determines their ability to approximate the shape of interest as well as the speed of convergence of related optimization algorithms. We present a new tensor-product parameterization of surfaces that offers the possibility of local refinement. The goal is to allocate additional degrees of freedom to the surface only where an increase in local detail is required. We introduce the possibility of locally increasing the number of control points by inserting basis functions at specific locations. Our approach is generic and relies on refinable functions, which satisfy the refinement relation. We show that the proposed method improves brain segmentation in 3D MRI images.
Title 2: An Inner-Product Calculus for Periodic Functions and Curves. Our motivation is the design of efficient algorithms to process closed curves represented by basis functions or wavelets. To that end, we introduce an inner-product calculus to evaluate correlations and L2 distances between such curves. In particular, we present formulas for the direct and exact evaluation of correlation matrices in the case of closed (i.e., periodic) parametric curves and periodic signals. We give simplifications for practical cases that involve B-splines. To illustrate this approach, we also propose a least-squares approximation scheme that is able to resample curves while minimizing aliasing artifacts. Another application is the exact calculation of the enclosed area.
Algorithmic Aspects of Compressive Sensing
Verner Vlacic, Cambridge University
Seminar • 03 October 2016 • BM 4 233 • 00248
Abstract: Ever since its inception, the theory of compressive sensing has been trying to explain and refine the incredible success of CS in practice. However, modern theory hinges on idealised optimisation models, which are solved by inexact algorithms in practice. We show that popular algorithms can fail badly even in the simplest of examples, which leaves us with several questions: How do we reconcile the theory and practice? More importantly, can we expand the theory so that it tells us exactly how to use the algorithms? In this talk we aim to address these issues.
High-quality parallel-ray X-ray CT back projection using optimized interpolation
Mike McCann, EPFL STI LIB
Meeting • 11 October 2016 • BM 4 233 • 00249
Abstract: Our X-ray reconstruction scheme relies on back projection of the measurements into the reconstruction domain, but computing this exactly is slow. In our previous work, we accelerated this with interpolation: we fit a continuous representation to samples of the signal, then sampled it at the required locations. In this work, we use a spline interpolation trick to improve the accuracy of the interpolation. Specifically, we apply a prefilter that orthogonally projects the underlying signal onto the space spanned by the interpolator before sampling it. We then build on this idea by using oblique projection, which simplifies the computation while giving effectively the same improvement in quality. Our experiments on analytical phantoms show that this refinement can improve the reconstruction quality for both filtered back projection and iterative reconstruction.
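The following toy sketch illustrates the role of the prefilter with SciPy's spline routines (a simplified 1D illustration of the prefiltering idea, not the paper's CT back-projection code; the talk's orthogonal and oblique projection prefilters refine this same mechanism). Fitting B-spline coefficients before resampling makes the continuous model consistent with the samples, whereas skipping the prefilter biases the interpolated values.

```python
import numpy as np
from scipy import ndimage

# samples of a smooth test signal on an integer grid
x = np.arange(64, dtype=float)
f = np.sin(2 * np.pi * x / 64)

# cubic B-spline interpolation *with* prefiltering: the prefilter maps the
# samples to B-spline coefficients so that the model interpolates the data
coeffs = ndimage.spline_filter1d(f, order=3)
xi = np.array([[10.25, 33.7, 50.5]])   # off-grid sampling locations
vals = ndimage.map_coordinates(coeffs, xi, order=3, prefilter=False)

# without the prefilter, direct B-spline "smoothing" of the samples is biased
vals_naive = ndimage.map_coordinates(f, xi, order=3, prefilter=False)

print(np.abs(vals - np.sin(2 * np.pi * xi / 64)).max())        # small error
print(np.abs(vals_naive - np.sin(2 * np.pi * xi / 64)).max())  # larger error
```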
SIGGRAPH ASIA 2016
Daniel Schmitter, EPFL STI LIB
Test Run • 01 November 2016 • BM 4 233 • 00250
Abstract: Title: Smooth Shapes with Spherical Topology: Beyond Traditional Modeling, Efficient Deformation, and Interaction. In this talk we discuss the work that we are presenting at SIGGRAPH this year. Existing shape models with spherical topology are typically designed either in the discrete domain using interpolating polygon meshes or in the continuous domain using smooth but non-interpolating schemes such as subdivision or NURBS. Both polygon models and subdivision methods require a large number of parameters to model smooth surfaces. NURBS need fewer parameters but have a complicated rational expression and non-uniform shifts in their formulation. We present a new method to construct deformable closed surfaces, which includes the exact sphere, by combining the best of two worlds: a smooth and interpolating model with a continuously varying tangent plane and well-defined curvature at every point on the surface. Our formulation is considerably simpler than NURBS while it requires fewer parameters than polygon meshes. We demonstrate the generality of our method with applications ranging from intuitive user-interactive shape modeling, continuous surface deformation, shape morphing, reconstruction of shapes from parameterized point clouds, to fast iterative shape optimization for image segmentation. Comparisons with discrete methods and with non-interpolating approaches highlight the advantages of our framework.
Learning Optimal Shrinkage Splines for ADMM Algorithms
Ha Nguyen, EPFL STI LIB
Meeting • 22 November 2016 • BM 4 233 • 00251
Abstract: I'll talk about a learning approach to signal denoising in which the shrinkage function of the ADMM algorithm is parameterized by the coefficients of a polynomial spline. The spline coefficients are learned through gradient descent so as to minimize the mean-square error between a collection of ground-truth signals and their reconstructions from noisy data. We also propose to impose various constraints on the shrinkage function based on theoretical observations. These constraints translate nicely into linear constraints on the spline coefficients, which results in a simple learning algorithm based on projected gradient descent. Experiments show that denoising with learned shrinkage splines performs well for various types of signals, both sparse and non-sparse.
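A minimal sketch of such a learning loop, under simplified assumptions: a toy pointwise-denoising setup, an antisymmetric linear spline standing in for the polynomial splines of the talk, and a single projection (nonnegative coefficients, which keeps the shrinkage sign-consistent) standing in for the linear constraints mentioned above. Everything here is illustrative, not the author's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# training data: sparse (Laplacian) signals plus Gaussian noise
x_clean = rng.laplace(scale=1.0, size=20000)
x_noisy = x_clean + 0.5 * rng.standard_normal(x_clean.size)

K, T = 16, 6.0                      # number of spline knots on [0, T]
knots = np.linspace(0.0, T, K)
c = knots.copy()                    # initialize as the identity (no shrinkage)

def spline_eval_and_weights(t):
    """Evaluate the antisymmetric linear spline and its per-knot weights."""
    s = np.sign(t)
    u = np.clip(np.abs(t), 0.0, T - 1e-9)
    h = T / (K - 1)
    idx = np.minimum((u / h).astype(int), K - 2)
    w = u / h - idx                 # fractional position inside the cell
    val = s * ((1 - w) * c[idx] + w * c[idx + 1])
    return val, s, idx, w

lr = 0.5
for it in range(200):               # projected gradient descent on the MSE
    f, s, idx, w = spline_eval_and_weights(x_noisy)
    r = (f - x_clean) * s           # chain rule through the sign
    grad = np.zeros(K)
    np.add.at(grad, idx, 2 * r * (1 - w) / x_clean.size)
    np.add.at(grad, idx + 1, 2 * r * w / x_clean.size)
    c -= lr * grad
    c = np.maximum(c, 0.0)          # projection: keep the shrinkage sign-consistent

print("learned shrinkage at the knots:", np.round(c, 2))
```

On this toy problem the learned coefficients develop the dead zone near the origin that one expects from a soft-threshold-like shrinkage.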
A multiple scattering approach to diffraction tomography
Luc Zeng, EPFL STI LIB
Meeting • 30 November 2016 • BM 4 233 • 00252
Abstract: The Lippmann-Schwinger equation is a model of light propagation that takes multiple scattering into account. In this work, we present a nonlinear inverse-problem framework to invert the Lippmann-Schwinger equation. We first describe an appropriate discretization of the free-space Helmholtz Green's function to improve the conditioning of the forward model. The inverse problem is formulated in the total-variation framework and is solved via an ADMM minimization scheme. The method is applied to the diffraction-tomography setup. The objects are considered to be transparent. The main advantage of this method is that it accounts for multiple scattering, including reflections.
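A hedged toy sketch of the fixed-point view of this forward model in 1D (sign and normalization conventions vary across references, and this is not the paper's discretization, which is designed to improve conditioning):

```python
import numpy as np

# 1D toy model: u'' + k^2 n^2(x) u = 0 with incident plane wave u_in = exp(ikx).
# With v = k^2 (n^2 - 1) and the free-space Green's function
# g(x) = exp(ik|x|)/(2ik), which satisfies g'' + k^2 g = delta, the
# Lippmann-Schwinger equation reads u = u_in - g * (v u).
n_pts, h, k = 512, 0.05, 2.0
x = (np.arange(n_pts) - n_pts // 2) * h
n_ref = np.ones(n_pts)
n_ref[np.abs(x) < 2.0] = 1.05                  # weak transparent object
v = k**2 * (n_ref**2 - 1.0)
u_in = np.exp(1j * k * x)

# dense Green's matrix (fine at this size; a careful discretization would be
# needed for conditioning, which we skip here)
G = h * np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)

u = u_in.copy()
for _ in range(50):                            # Born-type fixed-point iteration
    u_next = u_in - G @ (v * u)
    if np.max(np.abs(u_next - u)) < 1e-10:
        break
    u = u_next
print("max |scattered field| =", np.max(np.abs(u - u_in)))
```

The iteration converges here because the contrast is weak; for strong scatterers one needs the full nonlinear inversion described in the talk.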
Machine Vision forum in Heidelberg
Virginie Uhlmann, EPFL STI LIB
Test Run • 17 August 2016 • BM 4 233 • 00253
Abstract: Title: Spline-based models for image segmentation Abstract: Splines provide a unifying framework for solving a whole variety of image-processing problems that are best formulated in the continuous domain. In particular, splines can be used to define a particular type of active-contour algorithm called spline-snakes [1]. Active contours (or snakes) are very popular methods for image segmentation that consist of a curve evolving in the image from an initial position to the boundaries of the object of interest. Many different snake algorithms exist, which can usually be grouped into three main categories, namely point-based, level sets, and parametric snakes. Spline-snakes are a subcategory of parametric snakes that benefit from a continuous-domain representation, hence involving fewer parameters and being easy to handle analytically. In addition, spline-snakes are well-suited for semi-automated analysis pipelines and therefore hold a strong potential for user-friendly segmentation frameworks. Spline-snake algorithms rely on two main ingredients. The first one is the definition of the snake model, which includes the choice of a spline generator that serves as basis function. The snake curve is then continuously defined using the spline basis to interpolate between a collection of discrete control points on the image. The second ingredient is the so-called snake energy, an appropriately defined cost function that, upon minimization, drives the deformation of the snake curve to fit object boundaries. The snake energy is generally composed of external and internal forces, which attract the curve towards prominent image features (data fidelity) or constrain its rigidity (regularization), respectively. Out of these two aspects (snake curve model and energy), a whole zoo of spline-snakes with different properties can be defined. In this way, spline-snakes can yield both multi-purpose segmentation methods as well as approaches specifically tuned to match the features of particular problems. In this talk, we will present in more detail the general spline-snake construction and illustrate its use through a collection of applications to segmentation in 2D and 3D biomedical images. References: [1] R. Delgado-Gonzalo, V. Uhlmann, D. Schmitter, and M. Unser, "Snakes on a Plane: A Perfect Snap for Bioimage Analysis," IEEE Signal Processing Magazine, vol. 32, no. 1, pp. 41-48, January 2015.
Decoding Epileptogenesis: A Dynamical System Approach
Prof. Francois Meyer, University of Colorado at Boulder
Seminar • 09 February 2016 • 00254
Opportunities in Computational Imaging for Biomicroscopy
Prof. Michael Liebling, Idiap Research Institute
Seminar • 06 December 2016 • BM 4 233 • 00255
Abstract: Image-based characterization plays a central role when studying the mechanisms underlying dynamic biological systems. Despite tremendous advances in microscopy hardware, many live samples remain difficult to observe with off-the-shelf instruments, either because they are too dim or because they move too fast. In some cases, combining custom hardware arrangements or protocols with adapted image post-processing allows overcoming the physical limitations imposed by the sample or the instrument. I will describe several such end-to-end imaging methods that we developed in my lab, ranging from cardiac time-lapse imaging to temporal super-resolution. I will then present an ongoing effort to set up a platform for reproducible acquisition, processing, and sharing of dynamic, multi-modal data at Idiap.
Steerable template detection based on maximum correlation: preliminary results
Adrien Depeursinge, EPFL STI LIB
Meeting • 13 December 2016 • BM 4233 • 00256
Abstract: Object detection with invariance to rigid transformations is a common task in biomedical image analysis. In this work, we describe the design of polar-separable steerable filters that correlate maximally with a training template. In a first step, optimal radial profiles are derived based on the Cauchy-Schwarz inequality in the Fourier domain. The detection filter is constructed from higher-order complex Riesz transforms of the previously determined radial profile. In a second step, the radial profile is whitened with the power spectrum of the background to improve the discriminability of the filter. Preliminary experimental results are presented.
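The whitening step can be illustrated with a generic Fourier-domain matched filter (a standard construction, not the steerable detector of the talk; the background power spectrum below is crudely estimated from the image itself, which is an illustrative shortcut):

```python
import numpy as np

def whitened_matched_filter(image, template, bg_power, eps=1e-6):
    """Fourier-domain correlation with a template whose spectrum has been
    whitened by (divided by) the background power spectrum."""
    score = np.fft.ifft2(np.fft.fft2(image) *
                         np.conj(np.fft.fft2(template, s=image.shape)) /
                         (bg_power + eps))
    return np.real(score)

# toy example: plant a small Gaussian blob in noise and detect it
rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
yy, xx = np.mgrid[:7, :7]
template = np.exp(-((yy - 3.0) ** 2 + (xx - 3.0) ** 2) / 4.0)
img[20:27, 30:37] += 3.0 * template
bg_power = np.abs(np.fft.fft2(img)) ** 2 / img.size   # crude background PSD
score = whitened_matched_filter(img, template, bg_power)
print(np.unravel_index(np.argmax(score), score.shape))  # close to (20, 30)
```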
Lévy's Persian summers
Julien Fageot, EPFL STI LIB
Meeting • 18 October 2016 • 00257
Abstract: As you probably noticed, Alireza and Shayan came directly from Sharif University for a summer internship, in 2015 and 2016 respectively. Together, we were able to characterize the regularity of Lévy white noises (and if you understand the noise w, you are not too far from understanding the sparse process solution of Ls = w). As a signal-processing application, with John Paul, we quantified the compressibility of sparse processes. On Tuesday, I will talk about all these things.
2015
Total Variation Data Analysis - A Non-linear Spectral Framework for Machine Learning
Xavier Bresson
Seminar • 12 January 2015 • BIG 4.235 • 00225
Abstract: Machine Learning develops algorithms to identify patterns in large-scale and multi-dimensional data. This field has recently seen tremendous advances with the emergence of new powerful techniques combining the key mathematical tools of sparsity, convex optimization, and relaxation methods. In this talk, I will present how these concepts can be applied to find tight solutions of NP-hard balanced cut problems for unsupervised data clustering, significantly outperforming state-of-the-art spectral clustering methods including Shi-Malik's normalized cut. I will also show how to design fast algorithms for the proposed non-convex and non-differentiable optimization problems based on recent breakthroughs in total variation optimization borrowed from the compressed sensing field.
An Operator-Based Approach to Biomedical Imaging
John Paul Ward, LIB |STI | EPFL
Seminar • 02 February 2015 • BIG 4.235 • 00226
Abstract: We consider operator-based models of biomedical images and develop a framework for image processing tasks. Approximations using operator-like splines and wavelets will be treated. We illustrate our method in various applications.
Single Particle Reconstruction in Fluorescence Imaging
Denis Fortun, LIB | STI | EPFL
Seminar • 16 February 2015 • BIG 4.235 • 00227
Abstract: One of the new challenges of single-particle reconstruction is the mapping of proteins inside the particle structure with the help of fluorescence imaging. To this end, we present a method for the reconstruction of single particles from widefield data. We propose a joint deconvolution and multiview reconstruction approach dedicated to 3D fluorescence imaging. Our method is able to handle a large number of 3D observations with a computationally efficient augmented-Lagrangian-based optimization. Experimental results on synthetic data attest to the performance gain yielded by our method over standard tomography and averaging approaches.
Scaling Families of Fourier Multipliers and Tight Wavelet Frames
Zsuzsanna Püspöki, LIB | STI | EPFL
Seminar • 02 March 2015 • BIG 4.235 • 00228
Abstract: In analogy with steerable wavelets, I will present a general construction of adaptable tight wavelet frames, with an emphasis on scaling operations. In particular, the derived wavelets can be dilated by a procedure comparable to the operation of steering steerable wavelets. Furthermore, the fundamental aspects of the construction are the same; an admissible collection of Fourier multipliers is used to extend a tight wavelet frame, and the scale of the wavelets is adapted by scaling the multipliers. As an application, the proposed wavelets can be used to provide increased frequency localization, and importantly, the localized frequency bands specified by this construction can be efficiently adapted using matrix multiplication. Numerical experiments are presented to justify the method, and I also present results for feature extraction from real data.
Efficient Pattern Calibration and Image Super-Resolution for Structured Illumination Microscopy
Ning Chu, LIB | STI | EPFL
Seminar • 16 March 2015 • BM 4.235 • 00229
Abstract: Structured Illumination Microscopy (SIM) has been one of the most widely used and most effective methods in cell-structure imaging over the last decade. However, SIM is very sensitive to imperfections of the illumination pattern, namely the exact angle rotations and phase shifts. Without pattern calibration, most state-of-the-art methods can hardly achieve high resolution in practical use. To overcome this drawback, we first propose an efficient calibration approach based on the cross-correlation between the modulated frequency harmonics. Our calibration approach is able to estimate all the phase shifts and angle rotations directly from the observed wide-field fluorescence images, even in the worst case where the pattern cannot be seen at all in the observed images. After calibration, we propose a robust and efficient regularization approach based on TV-L1 and ADMM techniques. The proposed approach obtains results at least as good as those of state-of-the-art SIM methods, with fewer blur artifacts and better detail contrast. Finally, we show that the phase shifts need not be estimated, whereas they are indispensable for some of the classical SIM methods. Without knowing all of the phases, our proposed regularization approach still works well and yields much better image reconstructions. We will present the method validation through simulations and various real data from our partners.
Generalized Poisson Summation Formula for Functions of Polynomial Growth
Ha Nguyen, LIB | STI | EPFL
Seminar • 13 April 2015 • BM 4.235 • 00230
Abstract: The Poisson Summation Formula (PSF), which relates the sampling of an analog signal with the periodization of its Fourier transform, plays a key role in the classical sampling theory. In its current forms, the formula is only applicable to a limited class of signals in $L_1$. However, this assumption on the signals is too strict for many applications in signal processing that require sampling of non-decaying signals. In this talk I will discuss a generalized version of the PSF for functions living in weighted Sobolev spaces that do not impose any decay on the functions. The only requirement is that the signal to be sampled and its weak derivatives up to order $d/p$ grow slower than a polynomial in the $L_p$ sense, for some $p\in (1,2]$. The generalized PSF will be interpreted in the language of distributions.
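For reference, the classical statement that the talk generalizes reads, for a sufficiently well-behaved signal $f$ (this is the textbook formula, not the weighted-Sobolev version of the talk):

```latex
\sum_{k \in \mathbb{Z}^d} f(k) \;=\; \sum_{k \in \mathbb{Z}^d} \hat{f}(2\pi k),
\qquad
\hat{f}(\omega) = \int_{\mathbb{R}^d} f(x)\, e^{-\mathrm{i}\,\langle \omega, x \rangle}\, \mathrm{d}x .
```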
ISBI 2015 Test Run
Daniel Schmitter, Virginie Uhlmann, Zsuzsanna Püspöki, Adrien Depeursinge, LIB | STI | EPFL
Seminar • 25 March 2015 • BM 4.235 • 00231
Abstract: Title 1: Similarity-Based Shape Priors for 2D Spline Snakes Title 2: Tip-Seeking Active Contours for Bioimage Segmentation Title 3: Fast Detection and Refined Scale Estimation Using Complex Isotropic Wavelets Title 4: Optimized Steerable Wavelets for Texture Analysis of Lung Tissue in 3-D CT: Classification of Usual Interstitial Pneumonia
On the Development of the Concepts of Generalized Differentiation
Igor Podlubny, Technical University of Kosice, Slovakia
Seminar • 23 April 2015 • BIG 4.235 • 00232
Abstract: The idea of differentiation of non-integer order arose immediately after the birth of classical differential calculus. The subsequent development led, much later, to notions such as left- and right-sided fractional-order derivatives, variable-order derivatives, and distributed-order derivatives, all in various forms. Numerical methods for the computation of generalized derivatives and for the solution of differential equations with generalized derivatives were elaborated owing to the rapidly growing number of applications in science and engineering. We will take a somewhat non-traditional look at the development of these concepts and methods.
Joint Reconstruction and Segmentation Using the Potts Model
Martin Storath, EPFL STI LIB
Meeting • 11 May 2015 • BM 4 233 • 00233
Abstract: We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford-Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp-Logan phantom from 7 angular views only. We illustrate the practical applicability on a real PET dataset. As further applications, we consider spherical Radon data as well as blurred data. This is joint work with Jürgen Frikel, Michael Unser, and Andreas Weinmann.
SAMPTA 2015
Michael Unser, Virginie Uhlmann, Julien Fageot, John-Paul Ward, EPFL STI LIB
Meeting • 08 June 2015 • BM 233 • 00234
Abstract: Michael: Sampling and (sparse) stochastic processes: a tale of splines and innovation Virginie: Statistical Optimality of Hermite Splines Julien: Interpretation of Continuous-time Autoregressive Processes as Random Exponential Splines John Paul: Compressibility of symmetric-alpha-stable processes
Are (Sparse Processes)^2?
Julien Fageot, EPFL STI LIB
Meeting • 29 July 2015 • BM 4 233 • 00235
Abstract: Since An introduction to sparse stochastic processes ended up on your bedside table, we---the sparse process team---have kept on going forward. So what is new about sparse processes? Mainly, we are investigating the question of the regularity of sparse processes. We already have nice results, but also zillions of conjectures that I will present to you next Monday. Knowing about the regularity of a process gives information about the quality of its wavelet approximation. As a consequence, we will be able to understand why non-Gaussian sparse processes are more compressible than their boring Gaussian fellows. We will also introduce some parameters that quantify the level of compressibility of a given process. This should hopefully help us to answer this fundamental question: is a sparse process sparse?
2014
Wavelet-based Detection and Classification of Local Symmetries
Zsuzsanna Püspöki, EPFL STI LIB
Seminar • 13 January 2014 • BM 4.233 • 00203
Abstract: The ability to detect edges and local symmetry centers (or symmetric junctions) can be very useful for the quantitative analysis of microscopic images. For example, certain experiments in stem-cell research rely on the accurate detection of cell shape and extracellular structures (like tight junctions) that exhibit polygonal shapes. Also, in polycrystalline materials such as hexagonal graphene, it is fundamental to detect line defects since they strongly affect the physical and chemical properties of grain boundaries. In this presentation, we describe an algorithm for the detection of local symmetries and their classification in a template-free fashion. The algorithm is based on the circular harmonic wavelet transform, which distributes the energy of the signal among a set of angular harmonics. Based on this angular distribution, we propose a measure of symmetry and a hypothesis test for local symmetry at each pixel. Using the noted measure, we also formulate an approximate maximum-likelihood classifier in terms of the orders of local symmetry. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm.
Phase Microscopy: A Variational Approach for Optical Phase Retrieval Problem
Emrah Bostan, EPFL STI LIB
Seminar • 27 January 2014 • BM 4.233 • 00204
Exponential Hermite Splines for the Analysis of Biomedical Images
Virginie Uhlmann, EPFL STI LIB
Seminar • 31 March 2014 • BM 4.233 • 00205
Abstract: We present a new exponential B-spline basis that enables the construction of active contours for the analysis of biomedical images. Our functions generalize the well-known polynomial Hermite B-splines and provide us with a direct control over the tangents of the parameterized contour, which is absent in traditional spline-based active contours. Our basis functions have been designed to perfectly reproduce elliptical and circular shapes. Moreover, they can approximate any closed curve up to arbitrary precision by increasing the number of anchor points. They are therefore well-suited to the segmentation of the roundish objects that are commonly encountered in the analysis of bioimages. We illustrate the performance of an active contour built using our functions on some examples of real biological data.
FFT-Cost Implementation of HTH in Computed Tomography
Masih Nilchian, EPFL STI LIB
Seminar • 03 March 2014 • BM 4.233 • 00206
Abstract: To formulate reconstruction in computed tomography as an inverse problem, one must discretize the forward operator. One rigorous approach is to project the object onto a closed shift-invariant space $V=\{f(x)=\sum_k c_k\, s(x-k)\}$, where $s(x)$ is a generating function, and to take advantage of the linearity and pseudo-shift-invariance of the Radon transform to discretize the forward operator. However, such a rigorous formulation comes with a heavy computational cost. In this talk, we aim at addressing this issue. First, we present necessary conditions on $s(x)$ for $H^TH$ to be a filtering operator. Second, we examine how well this implementation works when B-splines are used as basis functions.
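The computational payoff can be demonstrated with a toy shift-invariant operator (a circulant blur standing in for the Radon pipeline of the talk, purely for illustration): once $H^TH$ is known to act as a filter, its impulse response is measured once and the operator is then applied at FFT cost.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256

# toy shift-invariant forward model: circular convolution with a blur kernel
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 9.0)
kernel /= kernel.sum()
K = np.fft.fft(np.roll(kernel, -n // 2))     # frequency response of H

def H(x):  return np.real(np.fft.ifft(K * np.fft.fft(x)))
def Ht(x): return np.real(np.fft.ifft(np.conj(K) * np.fft.fft(x)))

# measure the impulse response of H^T H once ...
e = np.zeros(n); e[0] = 1.0
hth_freq = np.fft.fft(Ht(H(e)))

# ... then apply H^T H to any signal at FFT cost
x = rng.standard_normal(n)
fast = np.real(np.fft.ifft(hth_freq * np.fft.fft(x)))
assert np.allclose(fast, Ht(H(x)), atol=1e-10)
```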
Variational Justification of Cycle Spinning for Inverse Problems
Ulugbek Kamilov, EPFL STI LIB
Seminar • 10 March 2014 • Bm 4.233 • 00207
Abstract: Estimating a signal from limited and noise-corrupted linear observations is a fundamental problem in signal processing. Wavelet-based methods seek a signal that admits a sparse representation in the wavelet domain. Cycle spinning is a technique commonly used to dramatically improve the performance of standard wavelet-based methods. At each iteration, the algorithm typically translates the current estimate, denoises it via basic wavelet denoising, and translates it back. To date, no theoretical convergence results were known for cycle spinning. Here, we prove that the algorithm is guaranteed to converge to the minimum of a global cost function that incorporates all wavelet shifts. The proof relies on stochastic optimization theory.
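A minimal sketch of the procedure being analyzed, with a one-level Haar soft-threshold standing in for the "basic wavelet denoising" (illustrative only, not the paper's code):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar soft-threshold denoiser (length of x must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin(x, denoiser, shifts):
    """Average the denoiser over circular shifts (translation invariance)."""
    out = np.zeros_like(x)
    for s in shifts:
        out += np.roll(denoiser(np.roll(x, s)), -s)
    return out / len(shifts)

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 2.0, -1.0, 3.0], 64)          # piecewise-constant
noisy = clean + 0.3 * rng.standard_normal(clean.size)
den = cycle_spin(noisy, lambda z: haar_denoise(z, 0.5), shifts=range(8))
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))  # True
```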
Jump Sparse Recovery Using the Potts Model
Martin Storath, EPFL STI LIB
Seminar • 10 February 2014 • BM 4.233 • 00208
Abstract: We recover jump-sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse reconstructions.
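The 1D version of the Potts problem can be solved exactly by dynamic programming, which is the classical building block behind such ADMM splittings; a compact O(n^2) sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact minimizer of sum_i (x_i - y_i)^2 + gamma * (#jumps of x)
    over piecewise-constant signals x, via dynamic programming."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def seg_err(l, r):  # squared error of the best constant on y[l:r]
        m = s1[r] - s1[l]
        return (s2[r] - s2[l]) - m * m / (r - l)

    B = np.full(n + 1, np.inf)                        # B[r]: best cost of y[:r]
    B[0] = -gamma                                     # the first segment is free
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(r):
            c = B[l] + gamma + seg_err(l, r)
            if c < B[r]:
                B[r], jump[r] = c, l
    x, r = np.empty(n), n                             # backtrack the segments
    while r > 0:
        l = jump[r]
        x[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return x

y = np.repeat([1.0, 4.0, 2.0], 30) + 0.4 * np.random.default_rng(5).standard_normal(90)
print(np.round(potts_1d(y, gamma=2.0), 1))            # three constant segments
```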
Compressive Sensing and Sparse Signal Representation of Ultrasonic Signals for Structural Health Monitoring Applications
Alessandro Perelli, University of Bologna, Italy
Seminar • 12 March 2014 • BM 4.233 • 00209
Abstract: Passive source localization in dispersive systems with sparse sensor arrays is a fundamental issue in applications such as seismic, radar, underwater acoustics, and wireless transmission. In this presentation, a new in-situ Structural Health Monitoring (SHM) system is shown, based on a wave-propagation approach, which is able to assess damage and to identify the location of acoustic-emission (AE) sources due to impacts. When dealing with such channels, it is necessary to compensate the frequency-dependent propagation; localization is then achieved from the time-difference-of-arrival (TDOA) between sensor outputs. A novel impact-localization algorithm based on the frequency-warping unitary operator applied to wavelet multiresolution analysis will be presented. The unitary frequency-warped representation is important to analyze classes of signals covariant to group-delay shifts, such as those propagating through frequency-dependent channels. Finally, a compressive acquisition scheme of Lamb-wave signals for damage detection will be presented. Compressed sensing has emerged as a potentially viable technique for efficient acquisition that exploits the sparse representation of dispersive ultrasonic guided waves in the frequency-warped basis. The framework is applied to lower the sampling frequency and to enhance the defect-localization performance of Lamb-wave inspection systems. The approach is based on the inverse Warped Frequency Transform as the sparsifying basis for the compressive-sensing acquisition and to compensate the dispersive behaviour of Lamb waves.
The Aggregation Framework for Optical Flow Estimation
Denis Fortun
Seminar • 17 April 2014 • BM 4.233 • 00210
Abstract: Optical flow estimation methods are usually divided into two main classes, according to the local or global extent of the spatial coherency constraint imposed on the motion field. Local parametric approaches have clearly been outperformed by global regularized models, due to the difficulty of choosing an appropriate local domain for parametric motion estimation. In this talk, we present a generic aggregation paradigm addressing this problem, based on purely local candidate estimations combined in a subsequent global aggregation step. Based on this versatile framework, we address several issues. Two aggregation methods are presented: the first operates in a discrete framework with move-making graph cuts; the second performs variational optimization of a sparsity-constrained combination of candidates. In each case, no motion segmentation is required and multi-resolution schemes are avoided. The locality of regularized models is exploited with a variational computation of candidates, and we experimentally demonstrate that locally affine estimations are sufficient to produce highly accurate candidates. Finally, the aggregation model is also adapted to handle two major issues of motion estimation, namely illumination changes and occlusions. The performance of this approach is demonstrated on standard computer-vision benchmarks, and it is shown to be particularly adapted to solving specific problems occurring in fluorescence imaging.
Fractional Calculus - Fractional-order Differential Equations
Tomás Skovránek, EPFL STI LIB
Seminar • 12 May 2014 • BM 4.233 • 00211
Abstract: Fractional-order differential equations (FDEs) and their numerical solution are currently a rapidly developing field of research. They open new horizons in the description of dynamical systems. Compared to classical integer-order models, FDEs provide a powerful instrument for the description of memory and hereditary properties of real systems. Ordinary and partial differential equations of fractional order can be applied to the modeling of many physical and engineering problems. Finding accurate and efficient approximate methods and numerical techniques for solving FDEs is the goal of many research works (e.g. Blank (1996); Diethelm (1997); Diethelm and Walz (1997); Diethelm and Ford (2002); Diethelm et al. (2002, 2004); Gorenflo (1997); Podlubny (1997); Kumar and Agrawal (2006)). In contrast to methods based on iterations, Podlubny's matrix approach, which is suitable for solving both ordinary and partial differential equations of integer and fractional order (see Podlubny (2000); Podlubny et al. (2009); Podlubny et al. (2013)), considers the whole time interval of interest at once. The system of algebraic equations is obtained by approximating the equation at all nodes simultaneously. To demonstrate the possibilities of using FDEs in the modeling of real systems, some applications (e.g. modeling the behavior of national economies) will be presented.
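A small sketch of the matrix idea for the Grünwald-Letnikov discretization (my reading of the approach, with illustrative parameter choices): the fractional derivative on a uniform grid becomes a lower-triangular Toeplitz matrix, so the whole time interval is treated at once.

```python
import numpy as np
from scipy.linalg import toeplitz

def gl_diff_matrix(alpha, n, h):
    """Matrix operator for the Grunwald-Letnikov fractional derivative of
    order alpha on a uniform grid with step h: a lower-triangular
    Toeplitz matrix of binomial-type coefficients."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):           # w_k = w_{k-1} * (1 - (alpha + 1) / k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return toeplitz(w, np.zeros(n)) / h**alpha

n, h = 200, 0.01
t = np.arange(n) * h

# sanity check: alpha = 1 reduces to the backward first difference
D = gl_diff_matrix(1.0, n, h)
print(np.max(np.abs((D @ t**2)[5:] - 2 * t[5:])))     # O(h) error

# half derivative of t: known closed form 2 * sqrt(t / pi)
Dh = gl_diff_matrix(0.5, n, h)
print(np.max(np.abs((Dh @ t)[5:] - 2 * np.sqrt(t[5:] / np.pi))))
```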
Continuous-Domain Linear (Spline) Projection Operators and Projector PCA for Shape Space Representation
Daniel Schmitter, EPFL STI LIB
Seminar • 08 July 2014 • BM 4.233 • 00212
Abstract: The standard approach to defining shape spaces is to consider shapes that are described by an ordered set of points or landmarks. They are geometrically normalized by aligning them to a common reference in order to remove some effects of rigid-body transforms. A standard PCA is then applied. The normalization introduces a bias in the model, because computing distances between normalized shapes generally does not yield the same result as for non-normalized shapes. Our alternative proposal is to define a finite-dimensional vector space that contains all possible shapes w.r.t. a given linear transformation of a reference shape. The idea is to generically characterize a shape space as a subspace containing all shapes that are related to a reference shape by a specific transformation. Thereby, the shape space itself is implicitly characterized by the orthogonal projection onto the vector space. This allows us to compute the "best match" among curves defined by a subspace w.r.t. an arbitrary shape. Our method does not include any normalization step prior to the shape-space definition; hence, no bias is introduced when comparing shapes. Describing shape spaces by projection operators onto vector spaces provides new possibilities to compare arbitrary shapes with each other. The concept can be extended to compute eigenshapes via projector PCA, as well as other shape statistics. The computation of such measures without the need for normalization implies that they are invariant to the transformation that defines the shape space.
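The core construction is easy to sketch: if the shape space is the span of a few generators (here, the affine transforms of a reference curve, as an illustrative choice of transformation family), then the "best match" is a plain orthogonal projection and no prior normalization step is needed.

```python
import numpy as np

# reference shape: N points on a closed curve, stacked as a 2N-vector (x; y)
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
rx, ry = np.cos(theta), 1.5 * np.sin(theta)          # an ellipse

# the shape space of all affine transforms of the reference is the span of
# six generators; its projector requires no alignment of the input shape
A = np.zeros((2 * N, 6))
A[:N, 0], A[:N, 1], A[:N, 2] = rx, ry, 1.0           # x' = a rx + b ry + tx
A[N:, 3], A[N:, 4], A[N:, 5] = rx, ry, 1.0           # y' = c rx + d ry + ty

def best_match(shape):
    """Orthogonal projection of an arbitrary shape onto the shape space."""
    coef, *_ = np.linalg.lstsq(A, shape, rcond=None)
    return A @ coef

# distance of a query shape to the space, with no normalization step
rng = np.random.default_rng(6)
query = np.concatenate([2 * rx + 0.5, ry - 1.0]) + 0.01 * rng.standard_normal(2 * N)
print(np.linalg.norm(query - best_match(query)))     # small: query is near the space
```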
Stochastic PDEs and Wavelet Approximation
John Paul Ward, EPFL STI LIB
Seminar • 29 July 2014 • BM 4.233 • 00213
Abstract: Stochastic partial differential equations (PDEs) are commonly used to model data for a variety of image processing tasks. In this talk, we address approximation theoretic tools for representing and interpreting data coming from medical imaging devices. Convergence and stability estimates for wavelet bases will be presented.
Signal Representations: From Images to Irregular-domain Signals
Ha Quy Nguyen, EPFL STI LIB
Seminar • 11 August 2014 • 00214
Abstract: In this talk, I will briefly present parts of my work at University of Illinois on the representations (either Fourier-like or wavelet-like) of different classes of signals ranging from cartoon-like images to signals living on graphs. The three main topics of the talk are: (1) nonlinear approximation of directional wavelets; (2) spherical harmonics for inverse rendering; and (3) graph wavelet transforms and applications.
Linear Structured Illumination Microscopy Applied in Super-Resolution Imaging
Ning Chu, EPFL STI LIB
Seminar • 01 September 2014 • BM 4 233 • 00215
Abstract: Fluorescence microscopy is a powerful tool for investigating structural organization and dynamical processes at the cellular level, but its spatial resolution is limited by the diffraction of light. Since the year 2000, linear Structured Illumination Microscopy (SIM) has become a breakthrough method: it can achieve nearly twice the spatial resolution of conventional fluorescence microscopy. However, SIM is sensitive to optical pattern aberrations and noise interference, which are unavoidable in biological experiments. In this presentation, we first introduce the linear SIM models applied to 2D and 3D super-resolution imaging. Our contributions mainly focus on analyzing the Point Spread Function (PSF) in the lateral and axial directions, as well as in multichannel light frequencies. To improve the SIM models, techniques from inverse problems such as deconvolution and regularization are used in favor of SIM robustness. Finally, we discuss our plans for applying the improved SIM to newly obtained real data provided by the BIOP lab.
Blind Deblurring and Blind Super-Resolution Using Internal Patch Recurrence
Tomer Michaeli, Weizmann Institute of Science, Israel
Seminar • 11 September 2014 • BM 4 233 • 00216
Abstract: Small image patches tend to recur at multiple scales within high-quality natural images. This fractal behavior has been used in the past for various tasks including image compression and super-resolution. We show that this phenomenon can also be harnessed for "blind deblurring" and for "blind super-resolution", that is, for removing blur or increasing resolution without a-priori knowledge of the associated blur kernel. Our key observation is that the source of the patch-recurrence phenomenon is the repetition of structures at various scales in the continuous scene. However, the way in which this continuous-domain phenomenon manifests itself in the discrete image depends on the imaging process. In particular, patches tend to repeat 'as is' in discrete images taken under ideal imaging conditions, but much less so in blurry images. These deviations from ideal patch recurrence can thus provide a cue for recovering the (unknown) blur kernel. Specifically, we show that the correct blur kernel is the one that maximizes the similarity between patches across scales of the image. Extensive experiments indicate that our approach leads to state-of-the-art results, both in deblurring and in super-resolution.
Texture-Based Computational Models of Biomedical Tissue in Radiological Images: Digital Tissue Atlases and Correlation with Genomics
Adrien Depeursinge, EPFL STI LIB
Seminar • 29 September 2014 • BM 4 233 • 00217
Abstract: Modern multidimensional imaging in radiology yields much more information than the naked eye can appreciate. Computerized quantitative image analysis can make better use of the image content by yielding exhaustive, comprehensive, and reproducible analyses of imaging features, which spawned the new field of radiomics. It has the potential to substitute for, and even surpass, invasive biopsy-based molecular assays, with the ability to capture intralesional heterogeneity in a noninvasive way. Radiomics is not a mature field of study, though. First, current computational models of biomedical tissue in multidimensional radiological protocols lack an appropriate framework for leveraging local image directions, which have been shown to be most relevant for characterizing the geometry of 3D biomedical texture. Second, most approaches proposed in the literature assume that the tissue properties are homogeneous over the tumors or organs, which is inadequate in most cases. We developed computational models of the multi-dimensional morphological properties of biomedical tissue. The Riesz transform and support vector machines are used to learn the organization of image scales and directions that is specific to a given biomedical tissue type. The models obtained can be steered analytically to enable rotation-covariant image analysis. While most rotation-invariant approaches discard precious information about image directions, rotation-covariant analysis enables modeling the local organization of image directions independently of their local orientation. Experimental evaluation revealed high classification accuracies for even orders of the Riesz transform, and suggested high robustness to changes in global image orientation and illumination. The proposed computational models were able to fit a wide range of textures and tissue structures. Future work includes the optimization of the steerable texture models to enable more flexible template designs with both continuous scale characterization and compact support. The models will be located in organ anatomy to create personalized phenotyping of diseases and estimate underlying genomic signatures. These digital organ models can be used to diagnose, assess treatment response, and predict prognosis with higher precision.
ICIP 2014 Test Run
Emrah Bostan, Julien Fageot, Martin Storath, Pedram Pad, Sander Kromwijk, EPFL STI LIB
Test Run • 20 October 2014 • BM 4 233 • 00218
Abstract: Title 1: Phase Retrieval by Using Transport-of-Intensity Equation and Differential Interference Contrast Microscopy Title 2: Statistics of Wavelet Coefficients for Sparse Self-Similar Images Title 3: Unsupervised Texture Segmentation Using Monogenic Curvelets and the Potts Model Title 4: VOW: Variance Optimal Wavelets for the Steerable Pyramid Title 5: High-Performance 3D Deconvolution of Fluorescence Micrographs
ISBI 2014 Test Run
Zsuzsanna Püspöki, Virginie Uhlmann, Daniel Schmitter, EPFL STI LIB
Test Run • 16 April 2014 • BM 4 233 • 00219
Test Run for Curves and Surfaces 2014
Pedram Pad, EPFL STI LIB
Test Run • 10 June 2014 • BM 4 233 • 00220
Abstract: Matched Wavelet-Like Bases for the Decoupling of Sparse Processes
In the land of the blind, the one-eyed man is king: non-blind, myopic, blind or shift varying deconvolution in biological imaging
Ferréol Soulez, Centre de Recherche Astrophysique de Lyon, Université Lyon 1, France
Seminar • 21 November 2014 • BM 4.235 • 00221
Abstract: From the telescope of Galileo to the recent Nobel prize on nanoscopy, many breakthroughs in instrumentation have led to major scientific discoveries. However, some recent instrumental achievements have almost reached theoretical bounds in terms of performance (high-throughput optics, very high quantum-efficiency detectors, single-molecule localization, ...), and in some cases (e.g. in the visible domain) future improvements will be harder and harder to obtain.
In biological imaging, besides fluorophore engineering and optics, signal processing may be one major source of future breakthroughs. Indeed, with the huge (and cheap) computing power now available, it is possible to numerically invert the image-formation process and gather most of the information diluted in the data.
In this context, I will present my work on deconvolution - the archetype of these "inverse problems" - for 3D fluorescence micrographs, with results on both simulated and real data. However, two main problems still prevent the dissemination of such methods in the bio-imaging community: (i) the lack of knowledge of the microscope response (the PSF) and (ii) the fact that this PSF may vary in depth and along the field of view. I will discuss two different approaches to estimate the PSF directly from the data: blind deconvolution, and myopic deconvolution when the PSF is known up to a few parameters. Finally, I will present some ways to model fast and accurate shift-variant operators that can be used for deconvolution.
Feeding the Hermite Snake
Virginie Uhlmann, LIB | STI | EPFL
Seminar • 01 December 2014 • BIG 4.235 • 00222
Abstract: After wandering for some time in the world of steerable wavelets, we moved to the spline universe and developed the Hermite snake. This active-contour model benefits from a variety of interesting properties that will be recalled, the most important one being its ability to generate non-smooth curves. (Extremely) recently, we proposed a novel approach that makes use of this capability to increase the robustness of automated segmentation. The method relies on automatically detected features, and therefore opens the possibility of a connection with our previous works on keypoints. The presentation aims at describing our recent work in the light of a PhD student's non-smooth thesis timeline. We will present preliminary results obtained with our method, and describe our future goals. Spoiler: everyone is happy at the end. To be consistent with the title of the presentation, croissants will be provided.
Beyond Scale Invariance: Conformal Invariance
Clement Hongler, EPFL
Seminar • 08 December 2014 • BIG 4.235 • 00223
Abstract: In this talk I will give an overview of results in probability and physics that deal with two-dimensional theories of curves and fields that possess conformal invariance. There are some non-trivial classification results, as well as remarkable exact computations. I hope this can provide some useful insights to people working in image processing.
Improved Variational Denoising of Flow Fields with Application to Phase-Contrast MRI Data
Emrah Bostan, LIB | STI | EPFL
Seminar • 15 December 2014 • BIG 4.235 • 00224
Abstract: We propose a new variational framework for the problem of reconstructing flow fields from noisy measurements. The formalism is based on regularizers penalizing the singular values of the Jacobian of the field. Specifically, we rely on the nuclear norm. Our method is invariant with respect to fundamental transformations and can be solved efficiently. We conduct numerical experiments on several phantom datasets and report improved performance compared to existing vectorial extensions of total variation and curl-divergence regularizations. Finally, we apply our reconstruction method to an experimentally acquired phase-contrast MRI recording to enhance the visualization of the data.
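The main computational primitive of such a formulation is the proximal operator of the nuclear norm, which soft-thresholds singular values; a generic sketch applied to a field of 2x2 Jacobians (illustrative only, not the paper's full solver):

```python
import numpy as np

def prox_nuclear(J, tau):
    """Proximal operator of tau * ||J||_* (nuclear norm): soft-threshold
    the singular values. J can be a batch of small Jacobians (..., 2, 2)."""
    U, s, Vt = np.linalg.svd(J)
    s = np.maximum(s - tau, 0.0)
    return U @ (s[..., None] * Vt)

# toy usage on a field of 2x2 Jacobians
rng = np.random.default_rng(7)
J = rng.standard_normal((32, 32, 2, 2))
J_shrunk = prox_nuclear(J, tau=0.5)
print(np.linalg.svd(J_shrunk, compute_uv=False).max())  # all values reduced
```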
2013
Uniqueness Results for Autoregressive Models
John Paul Ward, EPFL STI LIB
Seminar • 02 September 2013 • BM 4.233 • 00194
Abstract: Based on the assumption of a continuous-time autoregressive model, we consider uniform sampling. Within this framework, it is known that two distinct models can give rise to the same sample data; however, there is evidence to suggest that minimally restricting the continuous-time parameters can produce a collection whose samples are unique among all autoregressive models of a given order. In this talk, we shall discuss the uniqueness property and related results.
Innovation Model: Mathematics aspects and applications
Julien Fageot, EPFL STI LIB
Seminar • 07 October 2013 • 00195
Abstract: The innovation model is a continuous-domain stochastic model for sparse signals. Its role is to explain the empirical statistical behavior of sparse signals. As a recent mathematical tool for signal processing, its formalism is still under development and requires mathematical investigation. In my last presentation, I described results concerning the definition of the sparse stochastic processes underlying the model. On Monday, I would like to (i) recall the main mathematical principles and questions inherent to the model, (ii) present the new theoretical challenges we have recently been tackling, and (iii) propose some applications of the model to particular problems of signal processing. And, there will be croissants.
Local Image Analysis Using Higher-Order Riesz Transforms
Ross Marchant, James Cook University / CSIRO Computational Informatics
Seminar • 04 November 2013 • BM 4.233 • 00196
Abstract: The Riesz transform can be used to model local image structure as a superposition of sinusoids, where phase describes feature type (line or edge) and amplitude describes feature strength. Current models consist of one or two sinusoids and are derived analytically, limiting the order of the Riesz transform used. In this talk, we introduce an expanded set of signal models. The single-sinusoidal model of the monogenic signal is modified to have a residual component, allowing higher-order Riesz transforms to be included in the derivation. This improves the parameter estimation and leads to a new method of detecting junctions and corners. Following on, a multi-sinusoidal model consisting of any number of sinusoids is described, allowing features consisting of any number of lines or edges to be analysed. To find the model parameters, a recent method from super-resolution theory is applied. Finally, junctions and corners are not well modelled by sinusoids. To analyse these features we propose a model consisting of the superposition of a 2D steerable wavelet at multiple amplitudes and orientations. The component wavelet corresponds to either a line segment or an edge segment, depending on the feature of interest.
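For background, the first-order Riesz transform is a simple Fourier-multiplier operation; the sketch below computes the monogenic amplitude and phase of a pure oriented wave (a standard construction, not the author's higher-order models):

```python
import numpy as np

def riesz_transform(f):
    """First-order Riesz transform pair of a 2D image, via the Fourier
    multiplier -i * xi_j / |xi| (the DC term is set to zero)."""
    ky = np.fft.fftfreq(f.shape[0])[:, None]
    kx = np.fft.fftfreq(f.shape[1])[None, :]
    norm = np.sqrt(kx**2 + ky**2)
    norm[0, 0] = 1.0                       # avoid 0/0 at DC
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * kx / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * ky / norm * F))
    return r1, r2

# monogenic amplitude/phase of a toy oriented pattern (periodic on the grid)
y, x = np.mgrid[:128, :128]
img = np.cos(2 * np.pi * (6 * x + 2 * y) / 128)
r1, r2 = riesz_transform(img)
amplitude = np.sqrt(img**2 + r1**2 + r2**2)
phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)
print(amplitude.std() / amplitude.mean())  # nearly constant for a pure wave
```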
Investigation of the Mathematical Model of an Optical Microscope
Viktoriia Chorna, National Technical University of Ukraine Kyiv Polytechnic Institute / Optical Engineering Department
Seminar • 18 November 2013 • BM 4.233 • 00197
Abstract: This work is devoted to the mathematical description of 3D signal transformation in the imaging and illumination channels of a microscope. To analyze image formation in the microscope, it is necessary to use different characteristics of the system. The 3D point spread function (PSF) is the three-dimensional impulse response of an optical system; it is connected to the optical transfer function via the direct and inverse three-dimensional Fourier transforms. The PSF is useful for evaluating the spatial resolution of microscope optics in three-dimensional space, which is why the calculation and analysis of the 3D PSF is very important in optical microscopy. The modeling of image recording should take the microscope characteristics into account for an accurate estimation of the quality of the captured image.
Wavelet-Based Expansions and Simulation of Stochastic Processes
Ievgen Turchyn, UNIL
Seminar • 25 November 2013 • 00198
Hyperbolic Wavelet-Based Methods in Nonparametric Function Estimation and Hypothesis Testing
Jean-Marc Freyermuth, ORSTAT and Leuven Statistics Research Center, K.U.Leuven, Belgium
Seminar • 29 November 2013 • BM 4.233 • 00199
Abstract: In this talk we are interested in nonparametric multivariate function estimation. In Autin et al. (2012), we determine the maxisets of several estimators based on thresholding of the empirical hyperbolic wavelet coefficients; that is, we determine the largest functional space over which the risk of these estimators converges at a chosen rate. It is known from the univariate setting that pooling information from geometric structures (horizontal/vertical blocks) in the coefficient domain allows one to obtain large maxisets (see e.g. Autin et al., 2011a,b,c). In the multidimensional setting, the situation is less straightforward; in a sense, these estimators are much more exposed to the curse of dimensionality. However, we identify cases where information pooling has a clear benefit. In particular, we identify some general structural constraints that can be related to compound models and to a minimal level of anisotropy. If time allows, we will also discuss either the application of such methods to estimating the time-frequency spectrum of a (zero-mean) non-stationary time series whose second-order structure varies across time (in the spirit of Neumann and von Sachs, 1997), or how the geometry of the hyperbolic wavelet basis allows one to construct optimal testing procedures for some structural characteristics of the estimand.
Snakes: Everything You Always Wanted to Know About (but Were Afraid to Ask)
Ricard Delgado-Gonzalo, EPFL STI LIB
Seminar • 02 December 2013 • BM 4.233 • 00200
Abstract: Segmentation is one of the key tasks in image analysis. In medicine, the anatomical structures that appear in magnetic resonance (MR) or computed tomography (CT) scans are often segmented from the image for use in surgical planning, navigation, diagnosis, and therapy evaluation. In biology, the extraction of accurate cell outlines allows one to perform quantitative statistical measurements within cell structures while avoiding spurious fluctuations from the image background. Active contours (a.k.a. snakes) constitute a computationally attractive framework for image segmentation. The aim of this talk is to review the main aspects of active contours and to present an extended and inclusive taxonomy of different snake variants (2D and 3D). The session will serve as a tutorial on designing and implementing new spline-based snakes adjusted to different imaging modalities.
Active Contour for the Detection of Coronary Artery in Ultrasonography
Shogo Hiramatsu, University of Tokyo, Tokyo, Japan
Seminar • 16 December 2013 • BM 4.233 • 00201
Abstract: Our goal was to obtain a 3D model of the coronary artery from ultrasonography in order to assist heart surgery. Even though ultrasonography is useful because it is non-invasive and real-time, the modality has poor resolution and significant noise. With this in mind, we developed a system that can segment the coronary artery from noisy images and build a 3D model of it. The system was implemented as an ImageJ plugin written in Java and uses an active contour as the segmentation algorithm, with a spline representation of the contour. The active contour evolves so as to minimize a penalty function composed of a gradient penalty, a statistical penalty, and a constraint penalty. The energies are computed efficiently using pre-integrated images and Green's theorem. The active contour avoids self-intersections, so as not to violate Green's theorem; the self-intersection check is implemented with a sweep-line algorithm. The system applies the segmentation algorithm to the successive frames of an ultrasonography movie and creates a 3D model by volume rendering. The movie used for validation consisted of 120 frames; segmentation experiments were run 10 times, and all of them succeeded without the contour leaking out. This research should contribute to the practical use of active contours.
Bayesian Approach in Acoustic Source Localization and Imaging
Ning Chu, Laboratoire des signaux et systèmes (l2s), SUPELEC, France
Seminar • 23 December 2013 • BM 4.233 • 00202
Abstract: Acoustic imaging is an advanced technique for acoustic-source localization and power reconstruction using limited measurements from a microphone array. This technique can provide meaningful insights into the performance, properties, and mechanisms of acoustic sources. It has been widely used for evaluating acoustic comfort in the automobile and aircraft industries. Acoustic imaging methods often involve two aspects: a forward model of acoustic signal (power) propagation, and its inverse solution. However, the inversion usually leads to a very ill-posed inverse problem, whose solution is not unique and is quite sensitive to measurement errors. Therefore, classical methods cannot easily obtain high spatial resolution between two close sources, nor achieve a wide dynamic range of acoustic source powers. In this thesis, we first build a discrete forward model of acoustic signal propagation. This signal model is a linear but underdetermined system of equations linking the measured data with the unknown source signals and positions. Based on this signal model, we set up a discrete forward model of acoustic power propagation. This power model is both linear and determined, and it directly reflects the relationship between the measurements and the source powers. In the forward models, we consider the measurement errors to be mainly composed of background noise at the sensor array, model uncertainty caused by multi-path propagation, and model-approximation errors. For the inverse problem of the acoustic power model, we first propose a robust super-resolution approach with a sparsity constraint, so that we can obtain very high spatial resolution despite strong measurement errors; however, the sparsity parameter must be carefully estimated for effective performance. Then, for acoustic imaging with a large dynamic range and super resolution, we propose a robust Bayesian inference approach with a sparsity-enforcing prior. This sparse prior embodies the sparsity of the source distribution better than the sparsity constraint. All the unknown variables and parameters can be estimated automatically by Joint Maximum A Posteriori (JMAP) estimation. However, JMAP involves a non-quadratic optimization and incurs a huge computational cost. In order to accelerate the JMAP estimation, we investigate an invariant 2D convolution operator to approximate the acoustic power propagation model. Furthermore, we consider more practical cases in which the measurement errors are spatially variant (non-stationary) across sensors, rather than ideal Gaussian white noise. The sparsity-enforcing distribution can then be modeled more accurately by Student's-t priors. In these cases, JMAP has more limitations than advantages; we therefore apply the Variational Bayesian Approximation (VBA) to overcome the drawbacks of JMAP. Finally, the proposed approaches are validated by simulations, real data from wind-tunnel experiments of Renault S2A, and hybrid data. Compared with typical state-of-the-art methods, the main advantages of the proposed approaches are robustness to measurement errors, super spatial resolution, wide dynamic range, and no need to know the number of sources or the Signal-to-Noise Ratio (SNR) beforehand.
2012
Snakes. From active contours to active surfaces.
Ricard Delgado Gonzalo, EPFL STI LIB
Seminar • 06 February 2012 • 00172
Abstract: Snakes are effective tools for image segmentation. Within a 2D image, a snake is a 1D curve that evolves from an initial position, which is usually specified by a user, toward the boundary of an object. Within a 3D image, a snake is represented by a 2D surface. In the literature, these methods are also known as active contours or active surfaces. The snake evolution is formulated as a minimization problem, whose cost function is called the snake energy. Snakes have become popular because it is possible for the user to interact with them, not only when specifying their initial position, but also during the segmentation process. This is often achieved by allowing the user to specify anchor points that the curve or surface should go through. In this talk we will show a framework for the design of 2D and 3D snakes that are parameterized by a set of control points. We will mainly discuss the importance of a good parameterization and its impact on the computational performance of the final algorithm. On the practical side, we will present a user interface for ICY that features numerous possibilities for user interaction through mouse-based manipulation of control points in synchronized 2D and 3D views. High-quality data rendering is performed thanks to VTK. Moreover, the snake surface can be overlaid on the original data during the optimization process.
Operator-Like Wavelets
John Paul Ward, EPFL STI LIB
Seminar • 23 January 2012 • 00173
Abstract: In this talk, we propose an innovation model based on a stochastic differential equation. The two defining components of the model are a sparse white noise and a shift-invariant pseudo-differential operator. Within this framework, we construct wavelets that act like the operator so that the sparsity of the noise is transferred to the wavelet coefficients. A description of the construction as well as approximation properties of the wavelets will be discussed. Importantly, each of these properties is determined by conditions on the underlying operator.
A Hessian Schatten-Norm Regularization Approach For Solving Linear Inverse Problems
Stamatis Lefkimmiatis, EPFL STI LIB
Seminar • 20 February 2012 • 00174
Abstract: In this presentation, I will discuss a class of convex non-quadratic regularizers that can be employed for solving ill-posed linear inverse imaging problems. These regularizers involve the Schatten norms of the Hessian matrix, which is computed at every pixel of the image, and they share many similarities with the total-variation (TV) semi-norm: they satisfy the same geometric invariance properties with respect to transformations of the coordinate system, and they both depend on differential operators acting on the image. However, their advantage over TV is that, by capturing second-order information of the image, they can deal more effectively with the well-known staircase effect, a common artifact in TV-based reconstructions. Furthermore, a general first-order gradient-based optimization algorithm for the constrained minimization of the corresponding objective functions will be presented. The proposed algorithm is based on a primal-dual formulation and can efficiently cope with the non-smooth nature and the high dimensionality of the problem under study.
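For reference, the Schatten $p$-norm of a matrix $\mathbf{A}$ with singular values $\sigma_i(\mathbf{A})$ is
$\|\mathbf{A}\|_{\mathcal{S}_p} = \Big(\sum_i \sigma_i(\mathbf{A})^p\Big)^{1/p},$
so that the regularizers in question take the form $\mathcal{R}(f) = \sum_k \|\mathbf{H}f(\mathbf{x}_k)\|_{\mathcal{S}_p}$, summing over pixel locations $\mathbf{x}_k$; the choices $p=1$ (nuclear norm) and $p=\infty$ (spectral norm) give the two extreme members of the family.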
Stochastic Models and Techniques for Sparse Signals
Arash Amini, EPFL STI LIB
Seminar • 19 March 2012 • BM 4.233 • 00175
Abstract: Stochastic models are currently the main tools for interpreting the unpredictable parts of physical phenomena. To better cope with real-world data, it has recently been shown that the sparsity/compressibility property can be included in conventional stochastic models by considering various innovation models. In this talk, after a brief review of the new model, some results on the denoising and interpolation of such processes will be covered. The results are obtained by employing spline-theory tools for stochastic processes and by using characteristic forms in estimation problems.
Generalized Total Variation Denoising via Augmented Lagrangian Cycle Spinning with Haar Wavelets
Ulugbek Kamilov, EPFL STI LIB
Seminar • 12 March 2012 • 00176
Abstract: We consider the denoising of signals and images using a regularized least-squares method. In particular, we propose a simple minimization algorithm for regularizers that are functions of the discrete gradient. By exploiting the connection of the discrete gradient with the Haar-wavelet transform, the n-dimensional vector minimization can be decoupled into n scalar minimizations. The proposed method can efficiently solve total-variation (TV) denoising by iteratively shrinking shifted Haar-wavelet transforms. Furthermore, the decoupling naturally lends itself to extensions beyond l1 regularizers.
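A toy sketch of the core operation, shrinking shifted Haar transforms, is given below (translation-invariant one-level Haar shrinkage; the talk's actual augmented-Lagrangian iteration, which embeds such steps, is not reproduced here):

```python
import numpy as np

def soft(x, t):
    """Scalar soft-threshold (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink(y, t, shift):
    """One-level Haar analysis, detail shrinkage, and synthesis on a shifted copy."""
    z = np.roll(y, shift)
    n = len(z) // 2 * 2                                 # even part of the signal
    a = (z[0:n:2] + z[1:n:2]) / np.sqrt(2)              # approximation coefficients
    d = soft((z[0:n:2] - z[1:n:2]) / np.sqrt(2), t)     # shrunk detail coefficients
    out = z.copy()
    out[0:n:2] = (a + d) / np.sqrt(2)
    out[1:n:2] = (a - d) / np.sqrt(2)
    return np.roll(out, -shift)

def ti_haar_denoise(y, t):
    """Average the two possible shifts: cycle spinning makes the result shift-invariant."""
    return 0.5 * (haar_shrink(y, t, 0) + haar_shrink(y, t, 1))
```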
A flight over innovation models
Pouya Dehghani Tafti, EPFL STI LIB
Seminar • 02 April 2012 • BM 4.233 • 00177
Abstract: I give a brief overview of innovation models in their different shapes and forms, and will hopefully also discuss some extensions and potential applications of our previous work in this connection.
Estimating the MMSE of Estimation of Discrete AR and MA Stochastic Processes from Their Noisy Version Using PoSt moDERn Mathematics
Pedram Pad, EPFL STI LIB
Seminar • 07 May 2012 • 00178
Abstract: Finding the MMSE of estimating a stochastic process from its noisy version is a very basic problem in any field of signal processing. But, to the best of my knowledge, it has not been solved except in special cases such as Gaussian or i.i.d. processes. In this talk, I'm going to present our work on this problem for the general case of AR and MA processes with arbitrary input distributions. Large-deviations theory, random matrix theory, replica theory, and some basic definitions of information theory, with a pool of conjectures!!!, are our tools for approaching the problem. The results of this research could contribute to the fields of signal processing, coding of sources/channels with memory, and the statistical mechanics of non-i.i.d. spin glasses (if any exist).
Edge-Preserving Smoothers
Philippe Thévenaz, EPFL STI LIB
Seminar • 09 January 2012 • BM 4.233 • 00179
Abstract: After a short review of a few classical edge-preserving smoothers, we focus on a specific one called the bilateral filter, which has attracted the attention of practitioners due to its versatility. Unfortunately, its brute-force implementation is taxed by a severe computational cost, but several authors were able to overcome this drawback by proposing accelerated versions. The ingenuity of these solutions warrants their presentation. Our own contribution is yet another fast edge-preserving smoother. Contrarily to the previous ones, its construction does not proceed from an attempt at accelerating the bilateral filter. Notwithstanding, we demonstrate that our filter has formal links with it. Our cost per pixel is constant and depends neither on the data nor on the filter parameters, not even on the degree of smoothing. In Java, our performance is better than 25 frames per second for an image of size 512x512.
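For concreteness, the brute-force bilateral filter that the fast methods try to avoid can be sketched as follows (a standard textbook formulation, not the speaker's accelerated smoother); its cost per pixel grows with the window radius, unlike the constant-cost filter presented in the talk:

```python
import numpy as np

def bilateral_brute_force(img, sigma_s, sigma_r, radius):
    """Brute-force bilateral filter: O(radius^2) work per pixel."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # domain (spatial) kernel
    pad = np.pad(img.astype(float), radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))  # range kernel
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```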
Estimation from 1-bit-quantized linear measurements with applications to compressive sensing
Aurélien Bourquard, EPFL STI LIB
Seminar • 21 May 2012 • 00180
Abstract: This talk addresses the topic of signal estimation from 1-bit-quantized linear measurements. We first present a generalized forward model from the perspective of 1-bit compressive sensing. In a second step, we propose a practical algorithm for the recovery of signals based on generalized approximate message passing (GAMP). Experimental results demonstrate that our technique greatly improves the solution quality compared to conventional 1-bit compressive sensing.
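A minimal sketch of the 1-bit forward model (generic 1-bit compressive sensing with illustrative dimensions; the talk's generalized model and GAMP recovery are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 512, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                 # random sensing matrix
y = np.sign(A @ x)                          # only the sign of each measurement is kept
# amplitude information is lost: any positive rescaling of x yields the same y,
# so recovery algorithms typically estimate x up to its norm
```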
Super-resolution ...
Junhong Min, KAIST, South Korea
Seminar • 18 June 2012 • 00181
A wavelet tour of odd symmetry detection
Zsuzsanna Püspöki, EPFL STI LIB
Seminar • 12 July 2012 • 00182
Abstract: Although the detection of edges and various types of keypoints is a well-examined and well-described area, the multiscale detection of symmetry centers, especially of odd orders, remains a challenge in pattern analysis. Our goal is to explore odd-symmetry detectors based on wavelets and to provide an optimal solution to the problem. During the presentation, I review steerable wavelets and the construction of the 2-D polar-separable steerable pyramid. Then I propose different methods for developing an M-fold symmetry-detector wavelet. I also investigate the question of steering the detector to the optimal angle. Finally, I illustrate the advantages of our design for pattern analysis on simulated and real microscopy images.
Improving the Performance of Splitting-Based Algorithms for Linear Inverse Problems
Emrah Bostan, EPFL STI LIB
Seminar • 23 July 2012 • BM 4.233 • 00183
Abstract: Splitting-based algorithms are commonly used for linear inverse problems, as they take advantage of the separable nature of these problems. Although these algorithms are very efficient for a wide range of inverse problems, they are sensitive to parameter selection. In this talk, we will briefly review splitting-based algorithms, explain their drawbacks, and propose different methods to increase their robustness. Furthermore, we will compare the performance of these algorithms with other methods on high-dimensional nonconvex problems.
Ellipse-reproducing snakes -- Variation on a theme
Dr Cédric Vonesch, EPFL STI LIB
Seminar • 20 August 2012 • BM 4.233 • 00184
Abstract: Inspired by Delgado et al.'s work, we explore parametric representations of ellipses that are both minimal (i.e., involving 5 real parameters) and linear with respect to a fixed family of basis functions. We first show that it is impossible to obtain such a parametrization when the basis is constrained to be shift-invariant. As a work-around we construct a "shift-covariant" basis that does allow for a minimal and linear parametrization. By shift-covariant we mean that two elements of the basis are related through an integer shift and a multiplication by a complex number. Furthermore the basis elements are compactly supported and refinable. As a proof-of-concept we use this representation for constructing a 3D parametric model of a mitotic cell. Ultimately the goal is to create a model that can serve as phantom data for evaluating various image-processing algorithms in a biological context.
Wavelet Frames
John Paul Ward, EPFL STI LIB
Seminar • 27 August 2012 • 00185
Abstract: The primary focus of this talk will be tight wavelet frames of $L_2(\mathbb{R}^d)$. Basic properties of frames and tight wavelet frames will be covered, before introducing a new wavelet frame construction in dimensions $d \geq 2$. It will be shown how initial frames, based on an isotropic mother wavelet, can be transformed into new frames using Fourier multipliers. Examples of the resulting wavelets will be provided to illustrate the construction.
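For context, a family $\{\psi_k\}$ is a tight frame of $L_2(\mathbb{R}^d)$ with frame bound $A$ when
$\sum_k |\langle f, \psi_k\rangle|^2 = A\,\|f\|_2^2 \quad \text{for all } f \in L_2(\mathbb{R}^d),$
in which case $f = A^{-1} \sum_k \langle f, \psi_k\rangle\, \psi_k$: the same family analyzes and synthesizes the signal, just as with an orthonormal basis, but with possible redundancy.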
Approximate Message Passing with Consistent Parameter Estimation
Ulugbek Kamilov, EPFL STI LIB
Seminar • 01 October 2012 • BM 4.233 • 00186
Abstract: Approximate Message Passing (AMP) is a recently developed method for the statistical estimation of a signal from linear measurements. The method yields good results when the prior and noise distributions are known exactly. In this talk, we present Adaptive GAMP, a generalization of standard AMP to the case where the signal and noise distributions have parametric uncertainties. The talk will cover the algorithm, its analysis, and potential applications. The talk is based on joint work with Emrah Bostan, Aurelien Bourquard, Alyson Fletcher, Sundeep Rangan, and Michael Unser.
Exponential B-splines and the design of active contours and surfaces for biomedical image analysis
Ricard Delgado-Gonzalo, EPFL STI LIB
Seminar • 24 September 2012 • BM 4.233 • 00187
Abstract: Snakes are effective tools for image segmentation. Within a 2D image, a snake is a 1D curve that evolves from an initial position, which is usually specified by a user, toward the boundary of an object. Within a 3D image, a snake is represented by a 2D surface. In the literature, these methods are also known as active contours or active surfaces. Research has been fruitful in this area, and many snake variants have emerged. Among them, we are interested in the spline-based kind, where the curve or the surface is described continuously by some coefficients (a.k.a. control points) using basis functions. These snakes have become popular because it is possible for the user to interact with them, not only when specifying their initial position, but also during the segmentation process. This is often achieved by allowing the user to specify anchor points the curve or surface should go through. Our interest is to characterize the spline-like integer-shift-invariant bases involved in the design of this kind of snakes. We prove that any compact-support function that reproduces a subspace of the exponential polynomials can be expressed as the convolution of an exponential B-spline with a compact-support distribution. As a direct consequence of this factorization theorem, we show that the minimal-support basis functions of that subspace are linear combinations of derivatives of exponential B-splines. These minimal-support basis functions form a natural multiscale hierarchy, which we utilize to design fast multiresolution algorithms and subdivision schemes for the representation of closed geometric curves and surfaces. This makes them attractive from a computational point of view. Finally, we illustrate our scheme by building efficient active contours and surfaces capable of exactly reproducing ellipses in 2D and ellipsoids in 3D irrespective of their position and orientation.
Tracking of Intercellular Objects in Live Cells with Visible and Invisible Bodies
Dmitry Sorokin, Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
Seminar • 25 September 2012 • BM 4.233 • 00188
Abstract: The problem of motion analysis of single protein foci in live cells has become very prominent in the last decade. However, despite the fact that many research groups work on the development of automated frameworks for tracking intercellular particles and compensating for global cell motion, the problem has no universal solution. The three basic steps of most single-particle tracking algorithms can be formulated as: global cell-motion compensation, particle detection, and finding interframe correspondences of the detected particles to build their trajectories. There is no strict order to these three steps (in some algorithms, some of them are even merged), but the cornerstone among them is global cell-motion compensation, or registration. The choice of the order and of the algorithm for each step is highly dependent on the data. In this talk, cell-image registration approaches for image sequences with visible and invisible cell bodies are discussed. In both cases, the cell is registered to the state of the first frame using a superposition of the deformation fields between all consecutive frames. The registration algorithm for cells with visible bodies is based on matching cell-body contours. For the case of non-visible cell bodies, rigid and non-rigid algorithms are discussed. The rigid algorithm is based on matching the point sets defined by the centroids of the detected particles. In the non-rigid algorithm, the global-motion deformation field is derived from a spatiotemporal analysis of particle movement. A validation scheme for the algorithms, based on biological ground truth and simulated data, is discussed.
How to simulate an infinitely divisible random variable
Arash Amini, EPFL STI LIB
Seminar • 15 October 2012 • BM 4.233 • 00189
Abstract: A generic method for simulating arbitrary random variables is to apply the well-known CDF technique. However, it requires a closed-form expression, or at least a good numerical approximation, of the CDF. For infinitely divisible laws, we are usually provided instead with a closed-form expression of the characteristic form. Therefore, to apply the CDF technique, we would need to numerically implement an inverse Fourier transform followed by an integration. Due to the rounding involved, this approach may even change the tail decay of the random variable. In this talk, I will present alternative methods that are better suited to infinitely divisible distributions. In particular, I will explain rejection and subordination methods.
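For reference, the generic CDF technique mentioned above is a one-liner once the inverse CDF is available; the difficulty for infinitely divisible laws is precisely that $F^{-1}$ is rarely available in closed form. A minimal sketch, with a hypothetical inv_cdf argument and the exponential law used only as an example:

```python
import numpy as np

def sample_via_cdf(inv_cdf, n, rng):
    """CDF-inversion sampler: X = F^{-1}(U) with U uniform on (0, 1)."""
    u = rng.uniform(size=n)
    return inv_cdf(u)

# example with a closed-form inverse CDF: exponential law, F^{-1}(u) = -log(1 - u)
rng = np.random.default_rng(0)
x = sample_via_cdf(lambda u: -np.log1p(-u), 10_000, rng)
```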
A Sampling Theory for Mobile Sensing
Jayakrishnan Unnikrishnan, EPFL - LCAV
Seminar • 29 October 2012 • BM 4.233 • 00190
Abstract: Consider the problem of sampling a bandlimited spatial field using mobile sensors. Classical sampling theory commonly uses the spatial density of samples as the performance metric of a sampling scheme. We argue that in the mobile sensing paradigm, a more relevant metric is the path density, or total distance traveled by the sensors per unit spatial volume. We introduce the problem of designing sampling trajectories with minimal path density subject to the constraint that bandlimited spatial fields can be perfectly reconstructed using samples taken on these trajectories. We obtain partial solutions to this problem from certain restricted classes of trajectories. Our results for trajectories can be generalized to results for higher dimensional sampling manifolds. In the last part of the talk we demonstrate the possibility of performing spatial anti-aliasing by simultaneously using mobile sensing and time domain filtering. Our results have applications in environment monitoring with mobile sensors, and also in designing scanning schemes for MRI.
Towards the Optimal Representations of Sparse Stochastic Processes
Pedram Pad, EPFL STI LIB
Seminar • 12 November 2012 • BM 4.233 • 00191
Abstract: We have studied first-order systems driven by α-stable white noise. For these processes, we have proved that the Haar-type wavelet transform (HWT) produces less dependent coefficients than the Fourier transform. In addition, we have observed that, for very sparse signals (α < 1), the HWT performs as well as the optimal transform. To evaluate the quality of a transform, we use the Kullback-Leibler divergence (KLD) and a Stein-based criterion as measures of the independence of the coefficients; the Stein-based criterion relates to denoising signals embedded in additive white Gaussian noise (AWGN). This result is surprising, since the Fourier transform is known to be optimal for decoupling Gaussian processes (it is asymptotically equivalent to the Karhunen-Loève transform (KLT)). Also, despite the wide usage of wavelets, this is one of the few results on the optimality of wavelets, especially within a stochastic framework.
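A small numerical sketch of the objects involved (a symmetric α-stable Lévy process, i.e., the simplest first-order system, and its one-level Haar coefficients; the parameters are illustrative):

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 0.8                                    # very sparse regime (alpha < 1)
incr = levy_stable.rvs(alpha, 0.0, size=1024, random_state=0)  # SaS innovations
s = np.cumsum(incr)                            # Levy process: integrated innovation
# one-level Haar analysis of the process
approx = (s[0::2] + s[1::2]) / np.sqrt(2)
detail = (s[0::2] - s[1::2]) / np.sqrt(2)      # coefficients whose dependence is studied
```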
Adaptive Interpolation of Biomedical Images Based on a Continuous-Domain Stochastic Model
Aurélien Bourquard, EPFL STI LIB
Seminar • 03 December 2012 • BM 4.233 • 00192
A Convex Regularization Framework for Imaging Applications using the Structure Tensor
Stamatis Lefkimmiatis, EPFL STI LIB
Seminar • 17 December 2012 • BM 4.233 • 00193
2011
EOG-based Interface for Eye-controlled Applications
Tobias Wissel, University of Essex, UK
Seminar • 10 January 2011 • BM 4.233 • 00144
Abstract: In recent years, diverse modalities, as well as combinations of them, have been explored for Human-Machine Interface (HMI) applications. In this context, electrooculography (EOG) in particular has been used due to its simplicity and good performance. The talk aims at a consideration and independent evaluation of different approaches to feature extraction, involving the time and frequency domains, as well as classification methods, in terms of their applicability in an online system. A virtual keyboard is presented as an example front-end in the machine environment.
Signal Inpainting
Ayush Bhandhari, BIG
Test Run • 10 January 2011 • 00145
Abstract: Consider a signal that is a linear combination of K complex exponentials. Unlike most practical setups, where one can access (uniform or non-uniform) samples of this signal, we assume that we can only access samples over some finite union of intervals where the signal is non-vanishing. For all remaining intervals, we assume the signal is overdriven. With these assumptions on the signal model, we propose an empirical approach to resolve the frequencies of the complex exponentials.
Stochastic Models for Sparse and Piecewise-Smooth Signals
Michael Unser, BIG
Seminar • 24 January 2011 • 00147
Abstract: We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; this is analogous conceptually to the classical interpretation of a Gaussian stationary process as filtered white noise. The non-standard aspect is that the models are driven by non-Gaussian noise (impulsive Poisson or alpha-stable) and that the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete distributional characterization of these processes. We also introduce signals that are the non-Gaussian (sparse) counterpart of fractional Brownian motion; they are non-stationary and have the same $1/\omega$-type spectral signature. We prove that our generalized processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. Finally, we discuss implications for sampling and sparse signal recovery.
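In the innovation-model notation used throughout these talks, the process $s$ is specified through a whitening operator $\mathrm{L}$ and a white innovation $w$ via
$\mathrm{L}\,s = w \quad\Longleftrightarrow\quad s = \mathrm{L}^{-1} w,$
where, here, $w$ is non-Gaussian (impulsive Poisson or $\alpha$-stable) and $\mathrm{L}$ may fall outside the class allowed by the conventional theory of stationary processes, which is what makes the models non-standard.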
Locally Steered Wavelets for Image Denoising
Chiara Olivieri, University of Genova, Italy
Seminar • 21 February 2011 • BM 4.233 • 00148
Abstract: Image denoising in the wavelet domain has been one of the most addressed problems in image processing over the last twenty years, with thousands of papers, from the seminal work of Donoho to the more recent GSM-based approaches. We propose a novel approach to wavelet-based image denoising and try to show that an adaptively steered version of our wavelets achieves better results than the non-adaptive ones. In line with the majority of works on wavelet denoising, we simplify our setting to the assumption of white Gaussian noise. We start by defining the Riesz transform, its properties, and its use in conjunction with bandlimited wavelet frames. The resulting filters are steered according to an estimate of the local orientation obtained from the monogenic analysis of the image. We finally show the effect of steering in the simple case of soft-thresholding and in the more sophisticated SURE-LET framework.
Radial Basis Function Approximation on R^d
John Paul Ward, Houston Univ., Texas, USA
Seminar • 21 February 2011 • 00149
Abstract: In approximation theory, two important types of estimates are direct theorems and inverse theorems. The former bound the error of an approximation method, while the latter are used to classify functions based on approximation rates. Both results are equally important, and when combined, they can be used to characterize smoothness spaces in terms of an approximation procedure. This talk will cover both types of theorems applied to radial basis function (RBF) approximation on $\mathbb{R}^d$. Specifically, we will examine a general method for finding $L^p$ error estimates for approximation by RBFs that are ``close'' to Green's functions, and we will apply this method to find rates for some popular RBFs. This will be followed by a derivation of inverse estimates for RBFs with finite smoothness.
Maximum-likelihood identification of sampled Gaussian processes
Hagai Kirshner, EPFL STI LIB
Seminar • 11 April 2011 • 00150
Abstract: This work considers sampled data of continuous-domain Gaussian processes. We derive a maximum-likelihood estimator for identifying autoregressive moving average parameters while incorporating the sampling process into the problem formulation. The proposed identification approach introduces exponential models for both the continuous and the sampled processes. We construct a likelihood function from a digitally-filtered version of the available data which is asymptotically exact. This function has several local minima that originate from aliasing, plus a global minimum that corresponds to the maximum-likelihood estimator. We further compare the performance of the proposed algorithm with other currently available methods.
Parametric active contours and the BIG snake family
Ricard Delgado Gonzalo, EPFL STI LIB
Seminar • 16 May 2011 • BM 4.233 • 00151
Abstract: Active contours, and snakes in particular, are effective tools for image segmentation. Within an image, an active contour is a curve that evolves from an initial position, which is usually specified by a user, toward the boundary of an object. The evolution of the curve is formulated as a minimization problem. Snakes have become popular because it is possible for the user to interact with them, not only when specifying their initial position, but also during the segmentation process. In this talk, we will revisit the framework of parametric snakes, showing how to design them for specific applications. We will pay special attention to the spline-based ones and show recent optimality results. Some work-in-progress will be shown in order to provide an idea of the current research challenges.
Second-Order L1-Norm Regularization for Image Restoration with Biomedical Applications
Stamatis Lefkimmiatis, EPFL STI LIB
Seminar • 02 May 2011 • 00152
Abstract: This work considers a second-order L1-norm regularization method that can be effectively used for image-restoration problems, in a variational framework. The new regularization term relies on the spectral norm of the Hessian operator and is well-suited for the restoration of a rich class of images that comprises more than merely piecewise-constant functions. We show that the proposed regularizer retains some of the most favorable properties of TV, namely, convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop efficient minimization schemes of our complete objective function. The effectiveness of the proposed regularizer is validated through deblurring experiments under additive Gaussian noise on standard and biomedical images.
Non-Linear Reconstruction Algorithms for Magnetic Resonance Imaging
Matthieu Guerquin-Kern, EPFL STI LIB
Seminar • 14 February 2011 • 00153
Abstract: We start by recalling the challenges inherent to undersampled acquisitions in MRI. After briefly describing the inverse-problem approach to image reconstruction, we present our two main directions of research: 1) the development of an analytical phantom in order to validate reconstruction algorithms with reliable simulations, 2) the design of non-linear reconstruction algorithms and the investigation of their performance. Specifically, we propose algorithms that are tailored to the MRI reconstruction problem in order to speed up the reconstruction process. We present results on both simulated and scanner data, and we show that non-linear reconstruction can clearly outperform standard linear reconstruction.
Compressive Sensing, Quantization, and Message Passing
Ulugbek Kamilov, EPFL STI LIB
Seminar • 20 June 2011 • BM 4.233 • 00154
Abstract: Message-passing algorithms on graphical models have proved effective in several estimation problems. In this talk, we present the Generalized Approximate Message Passing (GAMP) algorithm, which was recently developed for compressive-sensing estimation with arbitrary noise distributions. The algorithm can be analyzed using a state-evolution recursion, which can predict the error performance of GAMP. We present an application of this algorithm to compressive-sensing estimation with quantized measurements and demonstrate its superior performance over other standard algorithms via numerical simulations.
Generalised sampling in Hilbert spaces
Ben Adcock, (Department of Mathematics, Simon Fraser University, Canada)
Seminar • 20 June 2011 • MEB 10, EPFL • 00155
Abstract: The purpose of this talk is to present a new framework for the problem of reconstructing an object - a signal or image, for example - in an arbitrary basis from its measurements with respect to certain sampling vectors. Unlike more conventional approaches, such as consistent reconstructions, this method is both numerically stable and guaranteed to converge as the number of samples increases. Moreover, the accuracy of the reconstruction is determined solely by the reconstruction space itself, and not by the nature of the sampling. The key ingredient in this framework is oversampling. By allowing the number of samples to be greater than the number of degrees of freedom in the reconstruction, one obtains both numerical stability and convergence. In addition, the amount of oversampling required can be determined by a quantity known as the stable sampling rate. This quantity is easily computable, and therefore stability and convergence can both be guaranteed a priori. The ideas for generalised sampling stem from considerations about how to discretise certain infinite-dimensional operators. In the final part of this talk, I shall describe this matter in more detail and briefly discuss applications to several related problems, including so-called infinite-dimensional compressed sensing. This is joint work with Anders Hansen (DAMTP, University of Cambridge).
On designing fringe demodulators using the Riesz transform
Prof. Chandra Sekhar Seelamantula, Department of Electrical Engineering , Indian Institute of Science, Bangalore
Seminar • 11 July 2011 • BM 4.233 • 00156
Stochastic Sparse Processes: Some results
Arash Amini, EPFL STI LIB
Seminar • 25 July 2011 • BM 4.233 • 00157
Self-Similar Vector Fields
Pouya Dehghani Tafti, EPFL STI LIB
Seminar • 22 August 2011 • BM 4.233 • 00158
Abstract: We propose statistically self-similar and rotation-invariant models for vector fields, study some of the more significant properties of these models, and suggest algorithms and methods for reconstructing vector fields from numerical observations, using the same notions of self-similarity and invariance that give rise to our stochastic models. We illustrate the efficacy of the proposed schemes by applying them to the problems of denoising synthetic flow phantoms and enhancing flow-sensitive magnetic resonance imaging (MRI) of blood flow in the aorta. In constructing our models and devising our applied schemes and algorithms, we rely on two fundamental notions. The first of these, to which we refer as `innovation modelling', is the principle---applicable both analytically and synthetically---of reducing complex phenomena to combinations of simple independent components or `innovations'. The second fundamental idea is that of `invariance', which indicates that in the absence of any distinguishing factor, two equally valid models or solutions should be given equal consideration.
Innovation model : some theoretical results on the wavelet analysis of signals
Julien Fageot, ENS, Paris
Seminar • 08 August 2011 • BM 4.233 • 00159
Abstract: The innovation model introduced by P. Tafti and M. Unser makes the hypothesis that the dependencies in an image are not the result of a stochastic phenomenon. We present some theoretical results in the framework of wavelet analysis: we try to understand the behavior across scales in the innovation model (central limit theorem, evolution equation, ...) and the consequences for image analysis.
Mycobacteria Tracking for ImageJ
Virginie Uhlmann, EPFL STI
Seminar • 08 August 2011 • 00160
Abstract: Cell tracking is a recurrent image-processing problem encountered by biologists, for which no general solution exists so far. In a joint collaboration between Prof. McKinney's microbiology lab and Prof. Unser's biomedical imaging group, we worked on developing an ImageJ plugin that aims to automate the cell-tracking and segmentation process for mycobacteria in time-lapse microscopy data. We will highlight the challenges faced in the development of a mycobacteria tracker, describe the methods we used, and present the results we obtained.
Deconvolution of Levy Processes Using Message Passing Technique
Pedram PAD, EPFL STI LIB
Seminar • 05 September 2011 • BM 4.233 • 00161
Abstract: Using message passing techniques, we perform MMSE estimation of a Lévy process from its noisy convolved version. First, we define the underlying graph model and the exact messages that have to be passed along the edges (which are functions). Then we simplify the messages to numbers that parameterize these functions. Up to this point, we obtain a very efficient algorithm for the MMSE denoising of Lévy processes. To generalize the method to the deconvolution problem, we need to simplify it further by decreasing the number of passed messages; the latter is achieved by restricting the number of messages to the number of vertices rather than the number of edges. This simplification usually decreases the computational complexity from O(n^2) to O(n).
Phase-contrast X ray computed tomography
Masih Nilchian, EPFL STI LIB
Seminar • 26 September 2011 • BM 4.233 • 00162
Abstract: Phase-contrast X-ray computed tomography is an important imaging method in biomedical imaging and microscopy. Presently, the reconstructions are performed using direct FBP-like approaches, e.g., the generalized filtered back-projection (GFBP). In this talk, we present how to formulate the reconstruction as an inverse problem and present some corresponding iterative algorithms. We show their performance on a test data set.
A spline framework for tomographic image reconstruction
Michael Unser, EPFL STI LIB
Seminar • 26 September 2011 • BM 4.233 • 00163
Abstract: In addition to Masih's talk, there will be a presentation on "A spline framework for tomographic image reconstruction" by myself, in preparation for Prof. Unser's and Masih's joint visit to PSI / Villigen (Tuesday 27.09.2011).
Edge-Preserving Smoothers
Philippe Thévenaz
Seminar • 09 January 2011 • BM 4.233 • 00164
Abstract: After a short review of a few classical edge-preserving smoothers, we focus on a specific one called the bilateral filter, which has attracted the attention of practitioners due to its versatility. Unfortunately, its brute-force implementation is taxed by a severe computational cost, but several authors were able to overcome this drawback by proposing accelerated versions. The ingenuity of these solutions warrants their presentation. Our own contribution is yet another fast edge-preserving smoother. Contrarily to the previous ones, its construction does not proceed from an attempt at accelerating the bilateral filter. Notwithstanding, we demonstrate that our filter has formal links with it. Our cost per pixel is constant and depends neither on the data nor on the filter parameters, not even on the degree of smoothing. In Java, our performance is better than 25 frames per second for an image of size 512x512.
Not received
Jean-Charles Baritaux and Julien Fageot
Seminar • 31 October 2011 • BM 4.233 • 00165
Abstract: not received
Iterative Segmentation of Autofluorescence Images with Background Correction
Ramtin Madani
Seminar • 10 October 2011 • BM 4.233 • 00166
Abstract: We propose an automated method for the high-quality segmentation of linear backgrounds in noisy fluorescence-microscopy data. In our approach, the segmented domains of the input image are determined through graph cuts. In order to properly compensate for the varying background intensities, this estimation is performed iteratively, using interpolated versions of the current image decomposition as input reference. Each interpolation task involves the minimization of a weighted smoothing-spline functional, which is done using an efficient multigrid approach. The performance of our resulting technique, as compared to the state of the art, is assessed through synthetic and real experiments.
not received
Jean-Charles Baritaux and Julien Fageot
Seminar • 01 January 2011 • BM 4.233 • 00167
Abstract: not received
To be announced
Aurélien Bourquard
Seminar • 14 November 2011 • BM 4.233 • 00168
Abstract: to be announced; publication in progress
Imaging Inverse Problems and Sparse Stochastic Modeling
Emrah Bostan
Seminar • 12 December 2011 • BM 4.233 • 00169
Abstract: This talk considers deriving a family of MAP estimators, based on the theory of continuous-domain sparse stochastic processes introduced by Unser et al., for inverse problems occurring in imaging. The family includes potential functions that are typically nonconvex, in addition to the traditional methods of Tikhonov and total-variation (TV) regularization. We also derive an algorithmic scheme for handling nonconvex problems. Further, we compare the reconstruction performance of the different estimators on the problem of MR image reconstruction.
3D PSF Models for Fluorescence Microscopy: Implementation and Super-resolution Applications
Hagai Kirshner
Seminar • 28 November 2011 • BM 4.233 • 00170
Abstract: I will introduce realistic 3D PSF (Point Spread Function) models for the wide-field and the confocal microscopes, and will show their potential use in super-resolution fluorescence microscopy. The presented work consists of model formulation by means of readily available microscopic parameters, and of a fast and accurate open source implementation in ImageJ. I will demonstrate the usefulness of the proposed models in point-source localization and in super-resolution data simulation. The proposed PSF-based approach is complementary to currently available algorithms that rely on simplified PSF models. The latter can be used for obtaining preliminary results and for having an immediate feedback about the quality of the experiment, while the more realistic models can be used for a more accurate analysis that will be performed at a later stage. Joint work with François Aguet and Daniel Sage.
ISBI practice
Seminar • 21 March 2011 • 00171
2010
Scalable Compression of 3D Medical Images with Optimized Volume of Interest Coding
Victor Sanchez, The University of British Columbia, Vancouver, Canada
Seminar • 08 March 2010 • CO017 • 00125
Abstract: Volumetric medical images, such as magnetic resonance imaging (MRI) and computed tomography (CT) sequences, are becoming a standard in healthcare systems and an integral part of a patient's medical record. Due to the vast amount of resources needed for archival and communication of these 3D data, it has become essential to employ compression methods that provide lossless reconstruction, random access, and scalability by quality and resolution. Lossless reconstruction is especially important to avoid any loss of valuable clinical data, which may result in medical and legal implications. Random access and scalability, on the other hand, are especially important in telemedicine applications, where clients with limited bandwidth using a remote image retrieval system may connect to a central server to access a specific region of a 3D medical image, i.e., a volume of interest (VOI), at different qualities and resolutions. In this presentation, we introduce a novel 3D medical image compression method with a) scalability properties, by quality and resolution up to lossless reconstruction; and b) optimized VOI coding capabilities at any bit-rate. We are particularly interested in interactive telemedicine applications, where a VOI is usually transmitted from a central server to a client at the highest quality possible, preferably in conjunction with a low-quality version of the background, which is important in a contextual sense to help the client observe the position of the VOI within the original 3D image. We will present the coding techniques employed by our proposed method, which include a 3D integer wavelet transform, embedded block coding with optimized truncation and 3D contexts, a bit-stream reordering procedure, and a VOI coding optimization technique. We will also demonstrate that the proposed method achieves a better coding performance, in terms of the peak signal-to-noise ratio, than that achieved by the two state-of-the-art region-of-interest coding methods adopted by the JPEG2000 standard, the MAXSHIFT and general scaling-based methods.
Wiener's Lemma and Sampling
Qiyu Sun, University of Central Florida, Orlando, USA
Seminar • 16 March 2010 • BM 4.233 • 00126
Abstract: In this talk, I will discuss Wiener's lemma for infinite matrices and its applications to non-uniform sampling problems.
A Priori Guided Reconstruction for FDOT using Mixed Norms
Jean-Charles Baritaux, BIG
Test Run • 26 March 2010 • BM 4.233 • 00127
Multi-Target Tracking of Packed Yeast Cells
Ricard Delgado-Gonzalo, BIG
Test Run • 26 March 2010 • BM 4.233 • 00128
Analytical Form of Shepp-Logan Phantom for Parallel MRI
Matthieu Guerquin-Kern, BIG
Test Run • 26 March 2010 • BM 4.233 • 00129
Fast Detection of Cells using a Continuously Scalable Mexican-Hat-Like Template
Kunal Narayan Chaudhury
Test Run • 26 March 2010 • BM 4.233 • 00130
Fractal Modelling and Analysis of Flow-Field Images
Pouya Tafti, BIG
Test Run • 26 March 2010 • BM 4.233 • 00131
Recent advances in biomedical imaging and signal analysis
M. Unser, Biomedical Imaging Group, EPFL, 1015 Lausanne, Switzerland
Seminar • 13 August 2010 • BM 4.233 • 00132
Abstract: Wavelets have the remarkable property of providing sparse representations of a wide variety of "natural" images. They have been applied successfully to biomedical image analysis and processing since the early 1990s. In the first part of this talk, we explain how one can exploit the sparsifying property of wavelets to design more effective algorithms for image denoising and reconstruction, both in terms of quality and computational performance. This is achieved within a variational framework by imposing some ℓ1-type regularization in the wavelet domain, which favors sparse solutions. We discuss some corresponding iterative shrinkage-thresholding algorithms (ISTA) for sparse signal recovery and introduce a multi-level variant for greater computational efficiency. We illustrate the method with two concrete imaging examples: the deconvolution of 3-D fluorescence micrographs, and the reconstruction of magnetic resonance images from arbitrary (non-uniform) k-space trajectories. In the second part, we show how to design new wavelet bases that are better matched to the directional characteristics of images. We introduce a general operator-based framework for the construction of steerable wavelets in any number of dimensions. This approach gives access to a broad class of steerable wavelets that are self-reversible and linearly parameterized by a matrix of shaping coefficients; it extends Simoncelli's steerable pyramid by providing much greater wavelet diversity. The basic version of the transform (higher-order Riesz wavelets) extracts the partial derivatives of order N of the signal (e.g., gradient or Hessian). We also introduce a signal-adapted design, which yields a PCA-like tight wavelet frame. We illustrate the capabilities of these new steerable wavelets for image analysis and processing (denoising).
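A minimal sketch of the ISTA iteration mentioned above, in synthesis form for a generic matrix A (the talk's multi-level variant and wavelet-domain weighting are not reproduced):

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```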
An ALPS view of Compressive Sensing
V. Cevher, EPFL, Laboratoire de systèmes d'information et d'inférence
Seminar • 13 August 2010 • 00133
Abstract: Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for acquisition of sparse or compressible signals that can be well approximated by just K<< N elements from an N-dimensional basis. Instead of taking periodic samples, we measure inner products with M≤N random vectors and then recover the signal via a sparsity-seeking optimization or greedy algorithm. The standard CS theory dictates that robust signal recovery is possible from M=O(K log(N/K)) measurements. The implications are promising for many applications and enable the design of new kinds of analog-to-digital converters, cameras and imaging systems, and sensor networks. In this talk, we introduce three first-order, iterative CS recovery algorithms, collectively dubbed algebraic pursuits (ALPS), and derive their theoretical convergence and estimation guarantees. We empirically demonstrate that ALPS outperforms the Donoho-Tanner phase transition bounds for sparse recovery using Gaussian, Fourier, and sparse measurement matrices. We then describe how to use ALPS for CS recovery in redundant dictionaries. Finally, we discuss how ALPS can also incorporate union-of-subspaces-based sparsity models in recovery with provable guarantees to make CS better, stronger, and faster.
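For orientation, the simplest first-order, sparsity-seeking iteration in this spirit is iterative hard thresholding (a close relative of, but not identical to, the ALPS algorithms of the talk):

```python
import numpy as np

def iht(A, y, k, n_iter=100):
    """Iterative hard thresholding: keep the k largest entries after each gradient step."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm of A
    for _ in range(n_iter):
        x = x + mu * (A.T @ (y - A @ x))   # gradient step on the data term
        small = np.argsort(np.abs(x))[:-k] # indices of all but the k largest entries
        x[small] = 0.0                     # hard-threshold to the sparsity level
    return x
```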
An overview of image processing questions in the optical microscopy field
Alessandra Griffa, EPFL Biop
Seminar • 27 August 2010 • BM 4.233 • 00134
Fast stereo-matching using translation-invariant wavelet pyramid
Zsuzsanna Püspöki, BIG
Seminar • 30 August 2010 • BM 4.233 • 00135
Abstract: We address the problem of disparity estimation from two-frame stereo images. We begin with a brief overview of state-of-the-art stereo algorithms, particularly the ones based on graph optimization. Within this class, we discuss dynamic programming in detail. Graph-based algorithms tend to be rather slow when both the stereo images and the disparities are large. We propose a "coarse-to-fine" algorithm that performs narrow-band dynamic programming on wavelet pyramids to reduce the computation time for such images. The key feature of our pyramid is that it provides significant translation-invariance at the cost of moderate redundancy. We evaluate our algorithm on the benchmark Middlebury database [vision.middlebury.edu/stereo]. Finally, we present some experimental results to illustrate the advantages of the method.
From sample probabilities to random processes: Life as a (somewhat frustrated) measure non-theorist
Pouya Dehghani Tafti, BIG
Seminar • 25 October 2010 • BM 4.233 • 00136
Abstract: We shall summarily discuss ways to relate finite-dimensional probability distributions to probability measures on function spaces (i.e. the stochastic law of random processes) and overview some of the shortcomings of the theory of measure and topology in this context. We shall offer no immediate remedy for these shortcomings. Cookies will be available to alleviate your suffering.
Advanced Bilateral Filter
Kunal Chaudhury, BIG
Seminar • 08 November 2010 • BM 4.233 • 00137
Wavelet steerability: from 2d to 3d.
Nicolas Chenouard, BIG
Seminar • 27 September 2010 • BM 4.233 • 00138
Abstract: In this presentation a short overview of our latest work on 2d steerable wavelets will first be given. This topic includes monogenic wavelet analysis, generalized Riesz wavelets, wavelet learning and image denoising. We will then focus more extensively on the transfer of these methods to the 3D microscopy world. While maths elegantly fit in n dimensions, we will show that going from Lena to 3D biological images is (unfortunately) not that straightforward. Some current solutions will be presented, while a special emphasis will be put on the issues which are still to be solved and on the possible applications of 3D steerable wavelets in bioimaging.
Accelerated Wavelet-Regularized Deconvolution For 3D Fluorescence Microscopy
Raquel Terres-Cristofani, BIG
Seminar • 13 September 2010 • BM 4.233 • 00139
Abstract: Modern deconvolution algorithms are often specified as minimization problems involving a non-quadratic regularization functional. When the latter is a wavelet-domain l1-norm that favors sparse solutions, the problem can be solved by a simple iterative shrinkage/thresholding algorithm (ISTA). This approach provides state-of-the-art results in 2-D, but is harder to deploy in 3-D because of its slow convergence. In this paper, we propose an acceleration scheme that turns wavelet-regularized deconvolution into a competitive solution for 3-D fluorescence microscopy. A significant speed-up is achieved through a synergistic combination of subband-adapted thresholds and sequential TwIST updates. We provide a theoretical justification of the procedure together with an experimental evaluation, including the application to real 3-D fluorescence data.
Reconstruction Approaches For 1-Bit Compressed Sensing and Sparse Interpolations
Aurélien Bourquard, BIG
Seminar • 13 September 2010 • 00140
Abstract: The first part of this talk addresses image acquisition and reconstruction in the framework of compressed sensing and 1-bit quantization. First, a suitable theoretical acquisition model is proposed. Then, a large-scale reconstruction algorithm that exploits bound-optimization concepts, which is the central contribution, is presented. The overall approach is finally placed in a more practical context, considering an optical device that performs several parallel acquisitions. The second part addresses image interpolation from a given subset of non-ideal samples. Based on generalized-sampling theory, the corresponding problem is first expressed in the continuous domain. In order to make the problem well-posed, an anisotropic regularization approach is proposed. Structurally, the reconstruction algorithm is shown to be akin to IRLS methods. The multilevel resolution strategy is described, and subsequent experiments are shown and discussed.
Bayesian Estimation for Continous-Time Sparse Stochastic Processes
Arash Amini, BIG and Sharif Univ. of Technology, Tehran, Iran
Seminar • 11 October 2010 • 00141
Abstract: In this talk, Arash will present a continuous-time stochastic model for signals that have a sparse representation in a transform domain, e.g., piecewise-constant signals. Based on characteristic forms, some joint probability distributions are derived that are useful for estimation problems such as denoising and interpolation. He will explain the case of piecewise-constant signals in more detail and compare the MAP estimator with the TV minimizer. He will also briefly point out some results regarding the compressibility of probability distributions.
Bayesian tracking applied to time-lapse fluorescence microscopy
Ricard Delgado Gonzalo, BIG
Seminar • 22 November 2010 • 00142
Abstract: Tracking algorithms are traditionally based on either a variational approach or a Bayesian one. In the variational case, a cost function is established between two consecutive frames and minimized by standard optimization algorithms. In the Bayesian case, a stochastic motion model is used to maintain temporal consistency. Among the Bayesian methods, we focus on the particle filter, which is especially suited for handling multimodal distributions. In this presentation, we present a novel approach that fuses both methodologies in a single tracker, where the importance sampling of the particle filter is given implicitly by the optimization algorithm of the variational method. Our technique is capable of outlining and tracking the lineage of biological cells, using different motion models for the mitotic and non-mitotic stages of the life of a cell. We validate its ability to track the lineage of HeLa cells in fluorescence microscopy.
Reconstruction formulas in phase-contrast tomography
Masih Nilchian, BIG
Seminar • 06 December 2010 • 00143
The influence of wavelet function shape on the results of NMR data processing
Ladislav Valkovic, Slovak Academy of Sciences, Bratislava, Slovakia
Seminar • 06 December 2010 • 00146
Abstract: The wavelet transform (WT) has proved, in both its forms (discrete and continuous), to be a useful tool for processing NMR datasets. The discrete WT can be used mainly for image (and spectral) filtering and contrast enhancement. The continuous WT is better suited for fMRI data analysis because of its ability to track signal changes in time. Although the WT has been found useful and applicable, other methods are still more common. This is probably a result of non-optimal settings of the WT used. The wavelet function, as a basic parameter of the WT, has many defined shapes, and basically many more are possible. The influence of the chosen function shape on the result of the WT is obvious, and therefore an optimal function shape has to be defined for each application. Our first experiments were with fast low-field high-resolution MRI, so the discrete WT was used (for image filtering). Fifty different function shapes, along with different thresholds and WT orders (1500 different options altogether), were compared. Different filter performance was observed, as expected, and according to two parameters (SNR and residual difference) an optimal setting was found. As every sequence has its own features, a different optimum is expected for each imaging sequence. In the field of fMRI, the WT is used to find signal changes and also to measure the hemodynamic response (HDR) curve of the activated region. We suspect that the function whose shape is closest to the HDR will give the best results in activation detection. Again, a variety of optimal wavelet functions is expected for different fMRI paradigms. Not only different regions, but also the same region activated by different stimuli, might result in slight differences in the HDR. Knowing the exact HDR is very important for future experiment design. We believe that, with proper settings, the WT is going to be a very precise and thus commonly used tool in fMRI data analysis.
2009
Distributed Signal Processing for Sensor Networks: Sampling and Inverse Problems
Yue M. Lu, Audio-Visual Communications Laboratory, EPFL
Seminar • 05 January 2009 • BM 5.202 • 00112
Abstract: Wireless sensor networks have profound implications on all aspects of human society, with possible applications ranging from fundamental scientific research---such as studying the effect of global warming---to issues that we deal with in everyday life---for example, identifying the levels of our interactions in a social network. This talk presents my work on several topics in sensor network signal processing. A common theme is to develop models and algorithms that can efficiently exploit the underlying physics of the unknown signals. First, I will discuss the sampling problem, whose fundamental goal is to capture a function with a set of samples. While regular multidimensional sampling theory is a well-developed field, it usually assumes homogeneity over the dimensions (as in images or volumetric data). However, in the case of physical field sampling by sensor networks, the dimensions -- space and time -- are specific and cannot be interchanged. For example, increasing the spatial sampling rate is often much more expensive than increasing the temporal sampling rate, since the former requires the physical presence of more sensors in the network, whereas the latter is, in theory, only constrained by the communication capacity and energy budget of the network. Motivated by the above issue, I will describe how to explore the fundamental trade-off between the spatial and temporal sampling densities of a sensor network based on the physical properties of the field. In the second part of the talk, I will present some of my ongoing work on developing new data processing algorithms that exploit the correlation between multiple physical sensing modalities incorporated in a single sensor network.
Advances in Cellular and Molecular Image Analysis
Erik Meijering, Erasmus MC - University Medical Center Rotterdam
Seminar • 12 January 2009 • BM 5.202 • 00113
Abstract: One of the main challenges of biomedical research in the postgenomic era is the unraveling of the molecular mechanisms of life. This is facilitated by recent advances in microscopic and molecular imaging technologies, which are having an enormous impact on the basic life sciences as well as human health care, by enabling a better understanding of disease mechanisms, the development of new biomarkers for early diagnosis, and preclinical validation of novel treatments in small-animal models as a first step towards clinical implementation. Studies into dynamic phenomena at the cellular and molecular levels generate vast amounts of spatiotemporal image data, potentially containing much more relevant information than can be analyzed by human observers. Hence there is a rapidly growing need for automated quantitative methods for the analysis of time-lapse imaging data, not only to cope with the rising rate at which images are acquired, but also to reach a higher level of sensitivity, accuracy, objectivity, and reproducibility. This presentation gives an overview of our recent advances in this area and the challenges that lie ahead.
Digital Holography and Blind Deconvolution
Ferréol Soulez, Université Jean Monnet, Saint-Etienne, France
Seminar • 16 January 2009 • BM 5.202 • 00114
Abstract: This talk presents an "inverse problems" approach to reconstruction in two different fields: digital holography and blind deconvolution. The "inverse problems" approach consists in investigating causes from their effects, i.e., estimating the parameters that describe a system from its observation. In general, the same causes produce the same effects; the same effects can, however, have different causes. To remove ambiguities, it is necessary to introduce a priori information. In this work, the parameters are estimated using optimization methods that minimize a cost function consisting of a likelihood term plus some prior terms. After a brief description of this approach, I will present, in a second part, its application to digital holography for particle image velocimetry (DH-PIV). Using a model of hologram formation, we use this "inverse problems" approach to circumvent the artifacts produced by the classical hologram restitution methods (distortions close to the image boundaries, multiple focusing, twin images). The proposed algorithm detects micro-particles in a volume 16 times larger than the camera field of view and with a precision improved by a factor of 5 compared with classical techniques. Finally, in a third part, I will show the use of this framework to address the problem of heterogeneous multidimensional blind deconvolution. Heterogeneous means that the different dimensions have different meanings and units (for instance, position and wavelength). For that, we have established a general framework with a separable prior, which has been successfully adapted to different applications: deconvolution of multi-spectral data in astronomy, of Bayer color images, and blind deconvolution of biomedical video sequences (in coronarography, conventional and confocal microscopy).
From Total Variation Towards Non-Local Means: Variational and Bayesian Models for Image Denoising
Cécile Louchet, MAP5 (maths appliquées à Paris 5) de l'Université Paris Descartes
Seminar • 10 March 2009 • BM.5.202 • 00115
Abstract: The ROF (Rudin, Osher, Fatemi, 1992) model, which introduced the total variation as a regularizing term for image restoration, has since been the subject of intense numerical and theoretical research. In this talk we present new models inspired by the total variation but built by analogy with a much more recent and diametrically opposed method: Non-Local Means. A first model is obtained by transposing the ROF model into a Bayesian framework. We show that the estimator associated with a quadratic risk (the posterior expectation) can be computed numerically thanks to an MCMC (Markov chain Monte Carlo) algorithm, whose convergence is carefully controlled in view of the high dimensionality of the image space. We notably prove that the associated denoiser avoids the staircasing effect, a well-known artifact that frequently occurs in ROF denoising. In a second part, we propose a neighborhood filter based on the ROF model and analyze several of its aspects: stability, limiting PDE, neighborhood weighting, etc. We show that this filter removes noise while maintaining local control over the noise.
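For reference, the ROF model mentioned above recovers the denoised image as the minimizer of a total-variation-regularized least-squares energy,

\[
\min_{u} \int_{\Omega} |\nabla u(x)|\,dx + \frac{\lambda}{2} \int_{\Omega} \bigl(u(x) - f(x)\bigr)^{2}\,dx,
\]

where f is the noisy image and \lambda > 0 trades regularity against fidelity to the data.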
Helmholtz meets Heisenberg: Sparse Remote Sensing
Prof. Thomas Strohmer, Department of Mathematics University of California, Davis
Seminar • 05 May 2009 • CO 017 • 00116
Abstract: We consider the problem of detecting targets via remote sensing. This imaging problem is typically plagued by nonuniqueness and instability, and is hence mathematically challenging. Traditional methods such as matched field processing are rather limited in the number of targets that they can reliably recover at high resolution. By utilizing sparsity and tools from compressed sensing, I will present methods that significantly improve upon existing radar imaging techniques. I will derive fundamental performance and resolution limits for compressed radar imaging with respect to the number of sensors and resolvable targets. These theoretical results demonstrate the advantages as well as the limitations of compressed remote sensing. Numerical simulations confirm the theoretical analysis. This is joint work with Albert Fannjiang, Mike Yan, and Matt Herman.
Wavelet Transforms with a Rational Dilation Factor
Ilker Bayram, Electrical and Computer Engineering Department Polytechnic Institute of NYU Brooklyn, New York (now with BIG)
Seminar • 22 June 2009 • BM 1.111 • 00117
Abstract: The dyadic wavelet transform is an effective tool for processing piecewise smooth signals; however, its poor frequency resolution (its low Q-factor) limits its effectiveness for processing oscillatory signals such as speech, music, EEG, and vibration measurements. In this talk, I will describe a more flexible family of discrete-time wavelet transforms (i.e., iterated filter banks) for which the frequency resolution can be varied. The new wavelet transform can attain higher Q-factors (desirable for processing oscillatory signals) or the same low Q-factor as the dyadic wavelet transform. The new wavelet transform is modestly overcomplete and based on rational dilations. Like the dyadic wavelet transform, it is an easily invertible 'constant-Q' discrete transform implemented using iterated filter banks and can likewise be associated with a wavelet frame for L2(R). I will also briefly talk (as time permits) about the problems I have worked on in the past, involving (i) a 'packet' extension of the dual-tree complex wavelet transform, (ii) the stability of (the frame bounds of) iterated filter banks, and (iii) inverse problems regularized with analysis priors (relevant for wavelet, TV, etc.).
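For reference, the Q-factor of a bandpass analysis channel is the ratio of its center frequency to its bandwidth,

\[
Q = \frac{f_c}{\Delta f},
\]

so the octave-wide channels of the dyadic transform have a low, scale-independent Q, whereas a rational dilation factor close to 1 yields narrower relative bandwidths and, hence, a higher Q.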
Particle Tracking in Noisy Microscopy Images: the Multiple Hypothesis Tracking Approach
Nicolas Chenouard, Quantitative Image Analysis Group Institut Pasteur, Paris
Seminar • 20 July 2009 • BM 4.233 • 00118
Abstract: Multiple hypothesis tracking (MHT) is a preferred technique for solving the data-association problem in modern multiple-target tracking systems. However, in bioimaging applications its use has long been thought impossible due to the prohibitive cost induced by the high number of objects to track and the poor quality of the images. We have proposed a new MHT formulation in which target perceivability is modeled, whereby early track termination and the exclusion of false measurements reduce the complexity of the problem and improve the robustness of the results to clutter. Moreover, we have proposed a fast MHT implementation that exploits the tree structure of the potential tracks. We have applied the method to the analysis of several sets of real microscopy images containing thousands of biological targets. By doing so, we demonstrate the benefits of the approach when tracking in very noisy environments such as low-light-level fluorescence microscopy images.
Gabor Wavelet Analysis and the Fractional Hilbert Transform
Kunal Narayan Chaudhury, BIG
Test Run • 21 July 2009 • BM 4.233 • 00119
Wavelet Primal Sketch Representation using Marr Wavelet Pyramid and its Reconstruction
Dimitri Van De Ville, BIG
Test Run • 21 July 2009 • BM 4.233 • 00120
Texture Synthesis using Marr Wavelet Pyramid
Dimitri Van De Ville, BIG
Test Run • 21 July 2009 • BM 4.233 • 00121
Self-similar Random Vector Fields and their Wavelet Analysis
Pouya D. Tafti, BIG
Test Run • 21 July 2009 • BM 4.233 • 00122
Some Unexpected Uses of Total Variation Minimization
Antonin Chambolle, CMAP - Ecole Polytechnique - CNRS, Paris, France
Seminar • 08 October 2009 • MEB10 • 00123
Abstract: In this talk I will discuss some recent work, in collaboration with Daniel Cremers (Bonn) and Tom Pock (U. Graz), on the minimization of "nonconvex" problems. (We'll see that there is no miracle, though.) I will explain how one can construct simple representations to solve reconstruction problems (in stereo, optical flow, ...) with a convex interaction term by minimizing a globally convex energy, in some kind of continuous variant of Ishikawa and Geiger's representation for MRFs. We will then try to extend this to truly nonconvex problems, such as the Mumford-Shah functional and the optimal partition problem.
Linear and Nonlinear Deterministic Compressed Sensing
Arash Amini, Guest PhD Student, BIG
Seminar • 01 December 2009 • BM 4.233 • 00124
Abstract: The developing field of compressed sensing, which studies the sampling and reconstruction of sparse signals from their linear projections onto lower-dimensional spaces, is mainly based on the random structure of the measurements. For practical applications, however, random samplers should be replaced by deterministic methods, both for storage purposes and for the reconstruction procedure. In contrast with the vast literature on random sensing theory, deterministic approaches have hardly been studied. Moreover, the currently known deterministic approaches fail to achieve the asymptotic bounds predicted for random measurements. In this talk, besides a brief introduction to compressed sensing and the RIP condition, I will discuss some of the known deterministic sampling matrices that were part of my previous work. I will also introduce a class of nonlinear sensing functions which are more efficient for the sampling and reconstruction tasks, but are sensitive to additive noise.
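For reference, a measurement matrix A satisfies the restricted isometry property (RIP) of order s, with constant \delta_s \in (0, 1), if

\[
(1 - \delta_s)\,\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_s)\,\|x\|_2^2
\]

for every s-sparse vector x. Random Gaussian matrices satisfy this with high probability; constructing deterministic matrices that match that behavior is precisely the difficulty alluded to above.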
2008
Flow Sensitive 4D Magnetic Resonance Imaging: Analysis of 4D Blood Flow Characteristics in the Human Vascular System
Aurélien Stalder, University Hospital Freiburg, Germany
Seminar • 17 November 2008 • BM 5.202 • 00092
Abstract: Magnetic Resonance Imaging (MRI) techniques provide a non-invasive method for the highly accurate anatomic depiction of the heart and vessels. In addition, the intrinsic sensitivity of MRI to flow, motion and diffusion offers the unique possibility to acquire spatially registered functional information simultaneously with the morphological data within a single experiment. Characterizations of the dynamic components of blood flow and cardiovascular function provide insight into normal and pathological physiology and have made considerable progress in recent years. The principles and limitations behind flow-sensitive MRI with ECG synchronization and respiratory control are briefly explained. Flow sensitive time-resolved 3D MR-Imaging using 3-directional velocity encoding for the detection and visualization of global and local blood flow characteristics in targeted vascular regions (aorta, cranial arteries, peripheral arteries) is presented. Blood flow characteristics in normal vascular geometries as well as for common pathologies were investigated using advanced computer aided data analysis which empowered the reader to take full advantage of the 4D nature (3 spatial and one temporal dimension) of the data. The comparison of vascular hemodynamics in volunteers and patients illustrates that even small pathological geometric changes such as mild aneurysms or prosthesis repair bear a major impact on local vascular hemodynamics and severely alter blood flow characteristics. Further quantification and analysis of derived vessel wall parameters may allow for a regional analysis of the impact of vascular pathologies on the vessel wall.
Ongoing Research Projects in Biomedical Signal Processing - Some Results and many Challenges
Jean-Marc Vesin, EPFL
Seminar • 10 January 2008 • BM 5.202 • 00100
Abstract: Biomedical signal processing relies more and more on advanced techniques for data pre-processing (notably interference cancellation) and clinically relevant feature extraction. The current state of several projects in our group will be introduced. Our investigation of the mechanisms of cardiac atrial fibrillation (AF) will be presented in some detail, especially the central role of the computer model of AF that we have developed. The characterization of gastro-intestinal tract activity using an innovative device ("magnet tracing") will be presented, and some time will be given to our work on the analysis of EEG and in-depth brain electrode signals. Since a recurrent theme in all these projects is the tracking of time-varying frequency components, the last part of the talk will be devoted to the developments we pursue in this domain.
3D Tracking of Dendrites
German Gonzalez Serrano, EPFL
Seminar • 07 February 2008 • BM 5.202 • 00101
l^1 Greedy Algorithm for Finding Solutions of Underdetermined Linear Systems
A. Petukhov and I. Kozlov
Seminar • 10 March 2008 • BM 5.202 • 00102
Abstract: Finding sparse solutions of underdetermined systems Ax=b has been a hot topic of the last years. Many problems of information theory (data compression, error-correcting codes, compressed sensing, etc.) can be reformulated in terms of this problem. We are going to discuss different applications of this problem and to present a new algorithm that solves it for a wider range of input data. The known algorithms represent two competing classes: l^1 minimization and orthogonal greedy methods. Our new algorithm may serve as "a peace agreement" between the two competing approaches.
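As background to the l^1 class mentioned above: the minimization min ||x||_1 subject to Ax = b can be cast as a linear program by splitting x = u - v with u, v >= 0. A minimal illustrative sketch (ours, not the speakers' algorithm; sizes and seed are arbitrary) using SciPy:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, k = 20, 50, 3                       # measurements, unknowns, sparsity
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    # min 1^T (u + v)  s.t.  [A, -A][u; v] = b,  u, v >= 0  (basis pursuit as an LP)
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    x_hat = res.x[:n] - res.x[n:]
    print(np.max(np.abs(x_hat - x_true)))     # tiny in this noiseless, very sparse regime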
Fast No Ground Truth Image Registration Accuracy Evaluation: Comparison of Bootstrap and Hessian Approaches
Dr. Jan Kybic, Czech Technical University
Seminar • 11 April 2008 • 00103
Abstract: Image registration algorithms provide a displacement field between two images. We consider the problem of estimating the accuracy of the calculated displacement field from the input images only, without assuming any specific model for the deformation. We compare two algorithms: the first is based on bootstrap resampling; the second, new method uses an estimate of the criterion Hessian matrix. We also present a block-matching strategy using multiple window sizes, where the final result is obtained by fusing partial results controlled by the accuracy estimates for the blocks involved. Both accuracy-estimation methods and the new registration strategy are compared experimentally on synthetic as well as real medical ultrasound data.
Wavefront Coding and Phase Masking Techniques
François Aguet, BIG
Seminar • 22 April 2008 • BM 4.235 • 00104
Abstract: In order to work around the inherent limitations of traditional optical systems (resolution, depth of field, aberrations, etc.), a multitude of new, hybrid approaches that combine modified optics and signal processing have been introduced in recent years. In this seminar, I will present some of the tools used to design such systems and review a selection of key contributions to the field.
Content Adaptive Model Operator for Single Photon Emission Computed Tomography
Ricard Delgado Gonzalo, Technical University of Catalonia
Seminar • 25 April 2008 • BM 5.202 • 00105
Abstract: This presentation introduces a new methodology for the full 2D and 3D calculation of a projection operator for emission tomography using the content-adaptive mesh model (CAMM) for image representation. The CAMM has been shown to be a promising methodology for volumetric data representation and tomographic reconstruction. Furthermore, it provides a unified framework for the tomographic reconstruction of organs that undergo non-rigid deformation, e.g., the heart. The CAMM is an efficient image representation based on adaptive nonuniform sampling and linear interpolation. The presented projection-operator model incorporates the major data-degradation effects, namely object attenuation and the detector/collimator spatial response, referred to as distance-dependent blur. The projection operator is calculated using a ray-tracing algorithm. The methodology presented here can easily be extended to transmission tomography. The derivation and implementation of the projection operator have been tested by reconstructing images obtained from a realistic data simulation. The research described establishes an important and necessary step toward the development of 4D (3D space + time) deformable CAMM reconstruction of organs with non-rigid motion.
Parametric B-Spline Snakes on Distance Maps: Application to Segmentation of Histology Images
Chandra Sekhar Seelamantula, BIG
Test Run • 22 August 2008 • BM 4.235 • 00106
Abstract: We construct parametric active contours (snakes) for outlining cells in histology images. These snakes are defined in terms of cubic B-spline basis functions. We use a steerable ridge detector to obtain a reliable map of the cell boundaries. Using the contour information, we compute a distance map and specify it as one of the snake energies. We also introduce a regularization term that favors smooth contours. A convex combination of the two cost functions results in smooth contours that lock onto edges efficiently and consistently. Experimental results on real histology images show that the snake algorithm is robust to imperfections in the images such as broken edges.
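In symbols (our paraphrase of the construction above), the snake is a closed curve expanded in the cubic-B-spline basis and the criterion is a convex combination of the two energies,

\[
\mathbf{r}(t) = \sum_{k} \mathbf{c}_k\, \beta^{3}(t - k), \qquad
E = \mu\, E_{\mathrm{dist}} + (1 - \mu)\, E_{\mathrm{smooth}}, \quad \mu \in [0, 1],
\]

where the control points c_k are the optimization variables.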
The Monogenic Riesz-Laplace Wavelet Transform
Michael Unser, BIG
Test Run • 22 August 2008 • BM 4.235 • 00107
Abstract: We introduce a family of real and complex wavelet bases of L_2(R^2) that are directly linked to the Laplace and Riesz operators. The crucial point is that the family is closed with respect to the Riesz transform which maps a real basis into a complex one. We propose to use such a Riesz pair of wavelet transforms to specify a multiresolution monogenic signal analysis. This yields a representation where each wavelet index is associated with a local orientation, an amplitude and a phase. We derive a corresponding wavelet-domain method for estimating the underlying instantaneous frequency of the signal. We also provide a simple mechanism for improving the shift and rotation-invariance of the wavelet decomposition. We conclude the paper by presenting a concrete analysis example.
The Marr Wavelet Pyramid and Multiscale Directional Image Analysis
Dimitri Van De Ville, BIG
Test Run • 22 August 2008 • BM 4.235 • 00108
Abstract: The Marr wavelet pyramid is a wavelet decomposition that implements a multiscale version of the complex gradient-Laplace operator. It is closely linked to a multiresolution analysis of L_2(R^2) and it has a fast filterbank implementation. We show how the Marr wavelets, which are essentially steerable, can be used to extract a multiscale version of the structure tensor. This yields a multiscale characterization of an image in terms of various features such as local gradient energy, orientation, and coherency. We provide an implementation of the proposed system as a Java plug-in for ImageJ, and we illustrate its applicability to directional image analysis which is useful in domains such as biological imaging and material science.
Lego-ball B-splines, and the best Wavelet Pool ever!
Reza Shirvany, ENSEEIHT, Toulouse, France
Seminar • 03 September 2008 • BM 4.235 • 00109
On the Role of Exponential Functions in Image Interpolation
Hagai Kirshner, Department of Electrical Engineering, Technion, Israel
Seminar • 22 October 2008 • BM 5.202 • 00110
Abstract: A reproducing-kernel Hilbert space approach to image interpolation is introduced. In particular, the reproducing kernels of Sobolev spaces are shown to be exponential functions. These functions, in turn, give rise to interpolation kernels that outperform presently available methods. Both theoretical and experimental results are presented. A tight l_2 upper-bound on the interpolation error is then derived, indicating that the proposed exponential functions are optimal in this regard. Furthermore, a unified approach to image interpolation by ideal and non-ideal sampling procedures is derived and demonstrated, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks.
Computer Aided Detection of Prostate Cancer Based on GDA and Predictive Deconvolution
Simona Maggio, University of Bologna
Seminar • 12 November 2008 • BM 5.202 • 00111
Abstract: A Computer-Aided Detection (CAD) scheme to support prostate cancer diagnosis based on ultrasound images is presented. The approach described in this work employs a multifeature classification model. To identify features highly correlated to the pathological state of the tissue we use a Hybrid Feature Selection algorithm based on mutual information. System-dependent effects are removed through predictive deconvolution and this operation results in increasing quality of images and discriminating power of features. A comparison of the classification model applied before and after deconvolution shows a gain in accuracy and area under the ROC curve. The use of deconvolution as preprocessing step in CAD schemes can improve prostate cancer detection.
2007
Optimal Lattices in Volume Graphics
Thorsten Möller, Simon Fraser University, Canada
Seminar • 12 January 2007 • BM 4.235 • 00080
Abstract: The body-centred cubic (BCC) and face-centred cubic (FCC) lattices have long been known to be advantageous over the Cartesian cubic (CC) lattice, but they have received almost no attention from practitioners. In this talk I will try to explain in what sense these lattices are advantageous and how some of our work could make them more practical and feasible for applications ranging from medical imaging to scientific computing.
Parallel MRI: Principles and Image Reconstruction
Klaas Prüssmann, Institute for Biomedical Engineering, ETH, Zürich
Seminar • 05 February 2007 • BM 4.235 • 00081
Abstract: Traditional MRI relies exclusively on the use of magnetic gradient fields for spatial signal encoding, leading to the Fourier transform as the standard method of MR image reconstruction. This situation has changed fundamentally with the recent advent of so-called parallel MRI methods using arrays of signal detectors. Parallel acquisition enables faster imaging; however, it does so at the expense of complications at the image-reconstruction stage. The conceptual step from traditional to parallel MRI is marked by the transition from pure Fourier encoding to a more general hybrid encoding paradigm. As a consequence, image reconstruction from array data requires more general concepts and algorithms. These will be discussed in detail, covering aspects such as the forward encoding formulation, the spatial response criterion, direct vs. iterative solving, noise propagation and control, and expansions towards perturbations other than reception characteristics.
Fast Wavelet-Regularized Image Deconvolution
Cédric Vonesch, BIG
Test Run • 03 April 2007 • BM 4.235 • 00082
Abstract: We present a modified version of the deconvolution algorithm introduced by Figueiredo and Nowak, which leads to a substantial acceleration. The algorithm essentially consists in alternating between a Landweber-type iteration and a wavelet-domain denoising step. Our key innovations are 1) the use of a Shannon wavelet basis, which decouples the problem across subbands, and 2) the use of optimized, subband-dependent step sizes and threshold levels. At high SNR levels, where the original algorithm exhibits slow convergence, we obtain an acceleration of one order of magnitude. This result suggests that wavelet-domain L1-regularization may become tractable for the deconvolution of large datasets, e.g., in fluorescence microscopy.
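The alternation described above is an iterative shrinkage/thresholding scheme. Below is a minimal NumPy sketch of the generic iteration, under simplifying assumptions of ours: a circular Gaussian blur, a single-level orthonormal Haar transform instead of the Shannon basis, and one global step size rather than the optimized subband-dependent parameters that produce the reported speedup.

    import numpy as np

    def soft(c, t):
        # Soft threshold: the proximal operator of the l1 penalty.
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def haar_ana(x):
        # One level of an orthonormal 2D Haar analysis (even-sized input).
        s = np.sqrt(2.0)
        lo, hi = (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s
        return ((lo[:, 0::2] + lo[:, 1::2]) / s, (lo[:, 0::2] - lo[:, 1::2]) / s,
                (hi[:, 0::2] + hi[:, 1::2]) / s, (hi[:, 0::2] - hi[:, 1::2]) / s)

    def haar_syn(ll, lh, hl, hh):
        # Exact inverse of haar_ana.
        s = np.sqrt(2.0)
        lo = np.empty((ll.shape[0], 2 * ll.shape[1])); hi = np.empty_like(lo)
        lo[:, 0::2], lo[:, 1::2] = (ll + lh) / s, (ll - lh) / s
        hi[:, 0::2], hi[:, 1::2] = (hl + hh) / s, (hl - hh) / s
        x = np.empty((2 * lo.shape[0], lo.shape[1]))
        x[0::2], x[1::2] = (lo + hi) / s, (lo - hi) / s
        return x

    # Circular Gaussian blur, applied in the Fourier domain.
    n = 64
    yy, xx = np.mgrid[:n, :n]
    r2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
    psf = np.exp(-r2 / (2.0 * 2.0 ** 2))
    psf /= psf.sum()
    Hf = np.fft.fft2(psf)
    blur = lambda v: np.real(np.fft.ifft2(np.fft.fft2(v) * Hf))
    blur_T = lambda v: np.real(np.fft.ifft2(np.fft.fft2(v) * np.conj(Hf)))

    rng = np.random.default_rng(0)
    x_true = np.zeros((n, n)); x_true[16:48, 16:48] = 1.0
    y = blur(x_true) + 0.02 * rng.standard_normal((n, n))

    lam, tau = 0.01, 1.0   # psf sums to 1, so ||H|| <= 1 and tau = 1 is admissible
    x = y.copy()
    for _ in range(100):
        z = x - tau * blur_T(blur(x) - y)       # Landweber (gradient) step
        ll, lh, hl, hh = haar_ana(z)            # wavelet analysis
        x = haar_syn(ll, soft(lh, tau * lam),   # shrink the detail bands only
                     soft(hl, tau * lam), soft(hh, tau * lam))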
Regularized Interpolation for Noisy Data
Sathish Ramani, BIG
Test Run • 03 April 2007 • BM 4.235 • 00083
Abstract: Interpolation is a vital tool in biomedical signal processing. Although there exists a substantial literature dedicated to noise-free conditions, much less is known in the presence of noise. Here, we document the breakdown of standard interpolation for noisy data and study the performance improvement due to regularized interpolation. In particular, we numerically investigate the Tikhonov (quadratic) regularization. On top of that, we explore non-quadratic regularization and show that this yields further improvements. We derive a novel bounded regularization approach to determine the optimal solution. We justify our claims with experimental results.
Sub-Resolution Maximum-Likelihood Based Localization of Fluorescent Nanoparticles in Three Dimensions
François Aguet, BIG
Test Run • 03 April 2007 • BM 4.235 • 00084
Abstract: Several recent studies have shown that fluorescent particles can be localized with an accuracy that is well beyond traditional resolution limits. Using a theoretical model of the image formation process that accounts for possible sources of noise, Cramer-Rao bounds have been used to define the theoretical limits. A crucial influence on these bounds is the mismatch of refractive indices that is usually present between immersion medium and specimen. This results in an axially shift-variant point spread function, meaning that the bounds change as a function of the particle's position in the z-direction. We investigate the theoretical bounds for this shift-variant model, and propose a maximum-likelihood estimator for the particle position in 3D (XYZ position). Using this estimator, sub-resolution localization at the nanometer scale is demonstrated on experimental data. The results provide optimal conditions for particle tracking and localization experiments.
Wavelet-based Statistical Analysis for Optical Imaging in Mouse Olfactory Bulb
Michael Unser, BIG
Test Run • 03 April 2007 • 00085
Non-iterative Exact Signal Recovery in Frequency Domain Optical Coherence Tomography
Chandra Sekhar Seelamantula, BIG
Test Run • 03 April 2007 • BM 4.235 • 00086
Abstract: We address the problem of exact signal recovery in frequency domain optical coherence tomography (FDOCT) systems. Our technique relies on the fact that, in a spectral interferometry setup, the intensity of the total signal reflected from the object is smaller than that of the reference arm. We develop a novel algorithm to compute the reflected signal amplitude from the interferometric measurements. Our technique is non-iterative, non-linear and it leads to an exact solution in the absence of noise. The reconstructed signal is free from artifacts such as the autocorrelation noise that is normally encountered in the conventional inverse Fourier transform techniques. We present results on synthesized data where we have a benchmark for comparing the performance of the technique. We also report results on experimental FDOCT measurements of the retina of the human eye.
A New Technique for High-Resolution Frequency Domain Optical Coherence Tomography
Chandra Sekhar Seelamantula, BIG
Test Run • 03 April 2007 • BM 4.235 • 00087
Abstract: Frequency domain optical coherence tomography (FDOCT) is a new technique that is well-suited for fast imaging of biological specimens, as well as non-biological objects. The measurements are in the frequency domain, and the objective is to retrieve an artifact-free spatial domain description of the specimen. In this paper, we develop a new technique for model-based retrieval of spatial domain data from the frequency domain data. We use a piecewise-constant model for the refractive index profile that is suitable for multi-layered specimens. We show that the estimation of the layered structure parameters can be mapped into a harmonic retrieval problem, which enables us to use high-resolution spectrum estimation techniques. The new technique that we propose is efficient and requires few measurements. We also analyze the effect of additive measurement noise on the algorithm performance. The experimental results show that the technique gives highly accurate parameter estimates. For example, at 25dB signal-to-noise ratio, the mean square error in the position estimate is about 0.01% of the actual value.
Image Denoising by Pointwise Thresholding of the Undecimated Wavelet Coefficients: A Global SURE Optimum
Florian Luisier, BIG
Test Run • 03 April 2007 • BM 4.235 • 00088
Abstract: We devise a new undecimated wavelet thresholding for denoising images corrupted by additive Gaussian white noise. The first key point of our approach is the use of a linearly parameterized pointwise thresholding function. The second key point consists in optimizing the parameters globally by minimizing Stein's unbiased MSE estimate (SURE) directly in the image-domain, and not separately in the wavelet subbands. Amazingly, our method gives similar results to the best state-of-the-art algorithms, despite using only a simple pointwise thresholding function; we demonstrate it in simulations over a wide range of noise levels for a representative set of standard grayscale images.
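For reference, the classical subband form of Stein's estimate (as used in Donoho and Johnstone's SureShrink) for a soft threshold t applied to N coefficients y_i contaminated by Gaussian noise of variance \sigma^2 reads

\[
\mathrm{SURE}(t) = N\sigma^{2} + \sum_{i=1}^{N} \min(|y_i|, t)^{2} - 2\sigma^{2}\,\#\{i : |y_i| \le t\},
\]

an unbiased estimate of the MSE that can be minimized without access to the clean signal. The contribution above is to minimize an image-domain counterpart of this quantity globally over the parameters of the thresholding function, rather than subband per subband.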
Reconstruction of Dynamic PET Data Using Spatio-Temporal Wavelet L_1 Regularization
Jeroen Verhaeghe, Ghent University, Ghent, Belgium
Seminar • 30 May 2007 • BM 5.202 • 00089
Abstract: Tomographic reconstruction from PET data is an ill-posed problem that requires regularization. Recently, Daubechies et al. proposed an L_1 regularization of the wavelet coefficients that can be optimized using iterative thresholding schemes. In this paper, we extend this approach for the reconstruction of dynamic (spatio-temporal) PET data. Instead of using classical wavelets in the temporal dimension, we introduce exponential-spline wavelets that are specially tailored to model time activity curves (TACs) in PET. We show the usefulness of spatio-temporal regularization and the superior performance of E-spline wavelets over conventional Battle-Lemarie wavelets for a 1-D TAC fitting experiment and a tomographic reconstruction experiment.
Speech and Audio Compression: The Twain Shall Meet
Chandra Sekhar Seelamantula, BIG
Seminar • 18 July 2007 • BM 4.235 • 00090
Abstract: Speech and music signals are nonstationary in nature, and their spectral properties, statistics, and perceptual attributes (such as pitch) change with time. A parsimonious representation of these signals is obtained by using nonstationary signal models instead of stationary ones. We consider the envelope-frequency model and propose a new zero-crossing technique for computing these parameters for a given signal in a multiband framework. These parameters are amenable to efficient quantization using psychoacoustic criteria. Preliminary experimental results show that the compression performance thus achieved is comparable to that obtained with standard coding techniques such as MP3 and MPEG-2 AAC. We also discuss issues related to blocking artifacts and dynamic auditory perception. The talk will be followed by an audio demonstration. This talk is based on a part of my Ph.D. thesis submitted to the Indian Institute of Science. This work was done in collaboration with Prof. T.V. Sreenivas, IISc.
Wavelet Bases Solving Infrared Divergence Phenomenon
Béatrice Vedel, Université de Picardie-Jules Verne, Amiens, France
Seminar • 19 July 2007 • BM 5.202 • 00091
Abstract: The infrared divergence phenomenon often appears in the wavelet analysis of self-similar objects (solutions of PDEs, homogeneous functional spaces, self-similar stochastic processes, ...). These objects, built as "fractional primitives" of classical objects (Lebesgue spaces, white noise), are a priori not defined in the sense of tempered distributions, and their wavelet expansions might diverge because of the low-frequency (infrared) part. Focusing on the cases of the homogeneous Sobolev spaces and the Mumford process, we will see how to define them in the sense of distributions when it is possible (works of G. Boudaud), and we will give an adapted wavelet analysis of them.
A Fast Iterative Thresholding Algorithm for Wavelet-Regularized Deconvolution
Cédric Vonesch, BIG
Test Run • 21 August 2007 • BM 4.235 • 00093
Operator-like Wavelets
Ildar Khalidov, BIG
Test Run • 21 August 2007 • BM 4.235 • 00094
Activelets and Sparsity: a New Way to Detect Brain Activation from fMRI Data
Ildar Khalidov, BIG
Test Run • 21 August 2007 • BM 4.235 • 00095
SURE-LET Interscale-Intercolor Wavelet Thresholding for Color Image Denoising
Thierry Blu, BIG
Test Run • 21 August 2007 • BM 4.235 • 00096
Analytic Sensing: Reconstructing Pointwise Sources from Boundary Laplace Measurements
Thierry Blu, BIG
Test Run • 21 August 2007 • BM 4.235 • 00097
Wavelets, Operators, and Invariance Principles
Michael Unser, BIG
Test Run • 21 August 2007 • BM 4.235 • 00098
Exploratory Analysis of Brain Imaging Data
Tülay Adali, University of Maryland, Baltimore County, USA
Seminar • 20 November 2007 • BM 5.202 • 00099
2006
A Sampling Framework for Cerebral Perfusion CT
Pau Montes, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg (D)
Seminar • 01 February 2006 • BM 5.202 • 00072
Abstract: In a perfusion computed tomography protocol, a contrast agent is injected into the patient and, subsequently, a time series of CT images of a region of interest is reconstructed. Due to the relatively high number of repeated CT scans, these must be carried out at a low dose level, yielding a notable noise level in the reconstructed images. In this presentation we propose an analysis of the dynamic acquisition process from a sampling point of view. This will lead us to a reconstruction algorithm for objects with time-dependent density. Finally, a method will be presented to optimize the SNR of the perfusion sequences obtained.
Image Coding of 3D Volume Using Wavelet Transform for Fast Retrieval of 2D Images
Vijayaraghavan Thirumalai, EPFL
Seminar • 30 January 2006 • BM 4.235 • 00073
Abstract: We propose an encoder/decoder system for 3D volumetric medical data. The system allows fast access to any 2D image by decoding only the relevant information from each subband image, and thus provides minimum decoding time. This will be of immense use to the medical community, because most CT, MRI, and PET modalities produce volumetric data. Since a full-fledged 3D wavelet transform is used for compression, the advantage of a good compression ratio is preserved. Preprocessing is carried out prior to the wavelet transform to enable easier identification of the coefficients from each subband image. The inclusion of special characters (markers) in the bit stream facilitates access to the corresponding information in the encoded data. Experiments are carried out by applying the Daub4 filter along the x (row) and y (column) directions and the Haar filter along the z (slice) direction, to account for the difference between inter-slice and intra-slice resolution. The performance of the system has been evaluated on four sets of volumetric data, and the results are compared to other 3D encoding/2D decoding schemes. Results show that, for slice spacings of 3-10 mm, there is a substantial improvement in decoding time. The speedup is found to be approximately 2.
Improved MRSI With Field Inhomogeneity Compensation
Ildar Khalidov, BIG
Test Run • 08 February 2006 • BM 4.235 • 00074
Abstract: Magnetic resonance spectroscopy imaging (MRSI) is a promising and developing tool in medical imaging. Because of various difficulties imposed by the imperfections of the scanner and the reconstruction algorithms, its applicability in clinical practice is rather limited. In this paper, we suggest an extension of the constrained reconstruction technique (SLIM). Our algorithm, named B-SLIM, takes into account the measured field-inhomogeneity map, which contains both the scanner's main-field inhomogeneity and the object-dependent magnetic-susceptibility effects. The method is implemented and tested with both synthetic and physical two-compartment phantom data. The results demonstrate a significant performance improvement over the SLIM technique. At the same time, the algorithm has the same computational complexity as SLIM.
A Strategy Based on Maximum Spanning Trees to Stitch Together Microscope Images
Philippe Thévenaz, BIG
Test Run • 08 February 2006 • BM 4.235 • 00075
Abstract: Assembling partial views is an attractive means to extend the field of view of microscope images. In this paper, we propose a semi-automated solution to achieve this goal. Its intended audience is the microscopist who desires to scan a large area while acquiring a series of partial views, but who does not wish to, or cannot, plan the path of the scan. In a first stage, this freedom is dealt with by interactive manipulation of the resulting partial views, or tiles. In a second stage, the position of the tiles is refined by a fully automatic pairwise registration process. The contribution of this paper is a strategy that determines which pairs of tiles to register, among all possible pairs. The central tenet of our proposed strategy is that two tiles that happen to possess a large common area will register with higher accuracy than two tiles with a smaller overlap. Our strategy is then to minimize the number of pairwise registrations while maximizing the global amount of overlap, and while ensuring that the local registration efforts are sufficient to link all tiles together to yield a global mosaic. By stating this requirement in a graph-theoretic context, we are able to derive the optimal solution thanks to Kruskal's algorithm.
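The pair-selection strategy amounts to computing a maximum spanning tree of the graph whose nodes are tiles and whose edge weights are the pairwise overlaps. A minimal sketch (tile names and overlap areas invented for illustration) using NetworkX:

    import networkx as nx

    # Nodes are tiles; weights are estimated pairwise overlap areas (in pixels).
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("A", "B", 12000), ("B", "C", 3000),
        ("A", "C", 9500), ("C", "D", 11000),
        ("B", "D", 2000),
    ])

    # Kruskal-style maximum spanning tree: the N - 1 registrations that connect
    # every tile while maximizing the total overlap of the registered pairs.
    T = nx.maximum_spanning_tree(G, algorithm="kruskal")
    print(sorted(T.edges(data="weight")))
    # [('A', 'B', 12000), ('A', 'C', 9500), ('C', 'D', 11000)]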
Digital Image Processing, Experimental Audio Filtering and Wavelet & Filter Banks
Aaron Hurley, NUI, Galway Mathematics Department, Ireland
Seminar • 09 March 2006 • BM.5.202 • 00076
Abstract: It will, I suppose, be mainly of an introductory nature, covering at the very least the following topics: 1. Spatial-Domain Image Processing; 2. Frequency-Domain Image Processing; 3. Cross Filtering in Audio; 4. Subband Coding; 5. The Dilation and Wavelet Equations, with reference to filter banks and perfect reconstruction. I will open by describing the set-up of the course, but will emphasise the 'signal' processing aspects.
Fast Multipole Method
Jan Kybic, Czech Technical University, Center for Machine Perception
Seminar • 28 March 2006 • BM.5.202 • 00077
Abstract: The accurate solution of the forward electrostatic problem is an essential first step before solving the inverse problem of magneto- and electro-encephalography (MEG/EEG). The symmetric Galerkin boundary element method is accurate but cannot be used for very large problems because of its computational complexity and memory requirements. We describe a fast multipole-based acceleration for the symmetric boundary element method (BEM). It creates a hierarchical structure of the elements and approximates far interactions using spherical-harmonics expansions. The accelerated method is shown to be as accurate as the direct method, yet for large problems it is both faster and more economical in terms of memory consumption.
Joint Texture and Topography Estimation for Extended Depth of Field in Brightfield Microscopy
François Aguet, BIG
Test Run • 31 March 2006 • BM 4.135 • 00078
Abstract: Brightfield microscopy often suffers from limited depth of field, which prevents thick specimens from being imaged entirely in-focus. By optically sectioning the specimen, the in-focus regions can be acquired over multiple images. Extended depth of field methods aim at combining the information from these images into a single in-focus image of the texture on the specimen's surface. The topography provided by these methods is limited to a map of the selected in-focus image for every pixel and is inherently discretized, which limits its use for quantitative evaluation. In this paper, we propose a joint texture and topography estimation, based on an image formation model for a thick specimen incorporating the point spread function. The problem is stated as a least-squares fitting where the texture and the topography are updated alternately. The method also acts as a deconvolution operation when the in-focus image has some blur left, or when the true in-focus position falls in-between two slices. The feasibility of the method is demonstrated with simulated and experimental results.
Large-Scale Histology of the Pituitary Gland: Towards a Reconstruction of a 3D Secretory Cellular Network
François Molino, Université de Montpellier, France
Seminar • 13 June 2006 • BM.1.130 • 00079
Abstract: The pituitary gland comprises different secretory cell types involved in major endocrine regulations (growth, maturation, etc.). Among them, the growth-hormone-secreting cells (GH cells) comprise half of the cells in the gland. A major challenge in growth-hormone regulation is to understand the role of the coupling between these cells in a network structure, in terms of secretion efficiency and regulation. It is known that a ratio of up to 1/1000 exists between the secretory capacity of isolated versus functionally connected cells under pharmacological stimulation. Up to now, the description of this functional cellular network has not been integrated in any physiological model; simplistic characterisations related to the number of cells are the only ones available. To achieve any physiological understanding, one must first succeed in describing the actual network in terms of the spatial position of the cells and their connectivity. Our goal is to use transgenic mice, in which the GH factor is co-expressed with a green fluorescent protein, and two-photon microscopy on the gland, to obtain the complete 3D structure of the network. This implies solving different image-analysis problems, which my talk will try to list, suggesting strategies of solution as developed in our lab.
2005
Think Analog, Act Digital
Prof. Michael Unser, Biomedical Imaging Group, EPFL
Seminar • 07 January 2005 • CO-017 • 00012
Abstract: By interpreting the Green-function reproduction property of exponential splines in signal-processing terms, we uncover a fundamental relation that connects the impulse responses of all-pole analog filters to their discrete counterparts. The link is that the latter are the B-spline coefficients of the former (which happen to be exponential splines). Motivated by this observation, we introduce an extended family of cardinal splines, the generalized E-splines, to extend the concept to all convolution operators with rational transfer functions. We construct the corresponding compactly supported B-spline basis functions, which are characterized by their poles and zeros, thereby establishing an interesting connection with analog-filter design techniques. We investigate the properties of these new B-splines and present the corresponding signal-processing calculus, which allows us to perform continuous-time operations such as convolution, differentiation, and modulation by simple application of the discrete version of these operators in the B-spline domain. In particular, we show how the formalism can be used to obtain exact, discrete implementations of analog filters. We also apply our results to the design of hybrid signal-processing systems that rely on digital filtering to compensate for the non-ideal characteristics of real-world A-to-D and D-to-A conversion systems. The seminar will be followed by an aperitif and a slide show on the speaker's recent trip to India in BM 5.202.
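A first-order illustration of this analog/discrete link (ours, not taken from the abstract): for the operator L = D - \alpha I, the causal Green function is \rho_\alpha(t) = e^{\alpha t} u(t), and the corresponding exponential B-spline is its localized version \beta_\alpha(t) = \rho_\alpha(t) - e^{\alpha}\rho_\alpha(t - 1), supported in [0, 1]. One then checks that

\[
\rho_\alpha(t) = \sum_{n \ge 0} e^{\alpha n}\, \beta_\alpha(t - n),
\]

so the B-spline coefficients e^{\alpha n} u[n] form the impulse response of the discrete all-pole filter 1/(1 - e^{\alpha} z^{-1}), the discrete counterpart of the analog filter 1/(s - \alpha).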
Improving the Quality of 3DEM Biological Information by the Use of New Advanced Reconstruction Algorithms
Carlos Óscar Sanchez Sorzano, EPFL LIB
Test Run • 06 May 2005 • 00024
Abstract: The knowledge of the spatial conformation of a biological macromolecule is key to understanding the mechanisms involved in its interaction with other particles. 3D electron microscopy is a structural technique with a wide range of applications. However, the produced images suffer from a high level of noise and strong distortions introduced by the electron microscope. In this context, robust image-processing algorithms are essential to reliably extract the spatial information. In this talk, some improvements on the 3D reconstruction step and their subsequent application to experimental data will be presented.
Direct Measurement of Myocardial Thickening by Means of Computer Vision
Michael Sühling, EPFL LIB
Meeting • 06 May 2005 • 00025
Abstract: The quantitative assessment of cardiac motion and deformation is fundamental to the evaluation of ventricular malfunction. The analysis of regional wall thinning and thickening, in particular, allows one to identify non-contracting regions of the myocardium and to differentiate active from passive tissue. We introduce a novel image-processing technique to measure myocardial motion and deformation from dynamic B-mode echocardiograms. In contrast to existing tissue-Doppler methods, which are limited to the ultrasonic beam direction, the proposed method yields 2D motion and deformation information. The method was tested on synthetic data sets produced by image warping. It was also applied to a range of clinical echocardiograms. Deformation results from real echocardiograms were in good agreement with the expert echocardiographic reading.
A Complete Family of Scaling Functions: The (alpha, tau)-Fractional Splines
Thierry Blu, EPFL LIB
Test Run • 31 March 2005 • 00026
Abstract: We describe a new family of scaling functions, the (alpha, tau)-fractional splines, which generate valid multiresolution analyses. These functions are characterized by two real parameters: alpha, which controls the width of the scaling functions, and tau, which specifies their position with respect to the grid (shift parameter). This new family is complete in the sense that it is closed under convolutions and correlations. We give the explicit time- and Fourier-domain expressions of these fractional splines. We prove that the family is closed under generalized fractional differentiations and, in particular, under the Hilbert transformation. We also show that the associated wavelets are able to whiten 1/f-type noise by an adequate tuning of the spline parameters. A fast (and exact) FFT-based implementation of the fractional spline wavelet transform is already available. We show that fractional integration operators can be expressed as the composition of an analysis and a synthesis iterated filterbank.
Extended Depth-of-Focus for Multi-Channel Microscopy Images: A Complete Wavelet Approach
Brigitte Forster, EPFL LIB
Test Run • 06 April 2005 • BM 4.235 • 00036
Abstract: Microscopy imaging often suffers from limited depth-of-focus. However, the specimen can be optically sectioned by moving the object along the optical axis; different areas appear in focus in different images. Extended depth-of-focus is a fusion algorithm that combines those images into one single sharp composite. One promising method is based on the wavelet transform. In this paper, we show how the wavelet-based image-fusion technique can be improved and easily extended to multi-channel data. First, we propose the use of complex-valued wavelet bases, which seem to outperform traditional real-valued wavelet transforms. Second, we introduce a way to apply this technique to multi-channel images that suppresses artifacts and does not introduce false colors, an important requirement for multi-channel fluorescence microscopy imaging. We evaluate our method on simulated image stacks and give results relevant to biological imaging.
Bimodal Myocardial Motion Analysis from B-Mode and Tissue Doppler Ultrasound
Michael Sühling, EPFL LIB
Test Run • 06 April 2005 • BM 4.235 • 00037
Abstract: We present a new method for estimating heart motion from two-dimensional echocardiographic sequences by exploiting two ultrasound modalities: B-mode and tissue Doppler. The algorithm estimates a two-dimensional velocity field locally by using a spatial affine velocity model inside a sliding window. Within each window, we minimize a local cost function that is composed of two quadratic terms: an optical-flow constraint that involves the B-mode data, and a constraint that enforces the agreement of the velocity field with the directional tissue-Doppler measurements. The relative influence of the two different modalities on the resulting solution is controlled by an adjustable weighting parameter. Robustness is achieved by a coarse-to-fine multi-scale approach. The method was tested on synthetic ultrasound data and validated by a rotating phantom experiment. First applications to clinical echocardiograms give promising results.
Wavelet-based Synchronization of Nongated Confocal Microscopy Slice-Sequences for 4D Cardiac Imaging in Living Embryos
Michael Liebling, California Institute of Technology
Seminar • 09 February 2005 • BM.1.119 • 00051
Abstract: With the availability of new confocal laser scanning microscopes, fast biological processes, such as the blood flow in living organisms at early stages of the embryonic development, can be observed with unprecedented time resolution. When the object under study has a periodic motion (e.g., a beating embryonic heart), the imaging capabilities can be extended to retrieve 4D data. We acquire nongated slice sequences at increasing depths and retrospectively synchronize them to build dynamic 3D volumes. I will present a synchronization procedure based on the temporal correlation of wavelet features. The method is designed to handle large data sets and to be immune to artifacts that are frequent in fluorescence imaging techniques such as bleaching, nonuniform contrast, and photon-related noise. This method has allowed us to create 4-dimensional working models of the heart and extract quantitative information on flow and heartwall motions from different stages of heart development. This is joint work with A.S. Forouhar, M. Gharib, S.E. Fraser, and M.E. Dickinson.
Exponential-Spline Wavelet Bases
Michael Unser, BIG, EPFL
Test Run • 16 March 2005 • BM 4.235 • 00052
Abstract: We build a multiresolution analysis based on shift-invariant exponential B-spline spaces. We construct the basis functions for these spaces and for their orthogonal complements. This yields a new family of wavelet-like basis functions of L2 with some remarkable properties. The wavelets, which are characterized by a set of poles and zeros, have an explicit analytical form (exponential spline). They are non-stationary in the sense that they are scale-dependent and not necessarily the dilates of one another. They behave like multi-scale versions of some underlying differential operator L; in particular, they are orthogonal to the exponentials that are in the null space of L. The corresponding wavelet transforms are implemented efficiently using an adaptation of Mallat's filterbank algorithm.
Generalized Daubechies wavelets
Cedric Vonesch, BIG, EPFL
Test Run • 16 March 2005 • BM 4.235 • 00053
Abstract: We present a generalization of the Daubechies wavelet family. The context is that of a non-stationary multiresolution analysis --- i.e., a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. The constraints that we impose on these scaling functions are: (1) orthogonality with respect to translation, (2) reproduction of a given set of exponential polynomials, and (3) minimal support. These design requirements lead to the construction of a general family of compactly-supported, orthonormal wavelet-like bases of L2. If the exponential parameters are all zero, then one recovers Daubechies wavelets, which are orthogonal to the polynomials of degree (N-1) where N is the order (vanishing-moment property). A fast filterbank implementation of the generalized wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. The new transforms offer increased flexibility and are tunable to the spectral characteristics of a wide class of signals.
Hierarchical Annealing for the Synthesis of Porous Media Images
Simon Alexander, Department of Applied Mathematics, University of Waterloo, Canada
Seminar • 16 June 2005 • BM.1.119 • 00054
Abstract: While motivated by a particular application, the work described is quite generalizable. Although present in the literature, simulated annealing for the synthesis of porous media has met with limited success due to computational costs and practical modelling constraints. An alternative method based on hierarchical annealing will be presented. Being inherently multiscale, such approaches may dramatically reduce the computational cost. Energy functions (based on, e.g., the chord-length distribution) in a hierarchy allow structures of different length scales to be treated separately, reducing convergence issues for samples with multiple natural scales. Such an approach naturally leads to methods of explicitly multiscale modelling.
WSPM: A new approach for wavelet-based statistical analysis of fMRI data
Dimitri Van De Ville, BIG, EPFL
Test Run • 09 June 2005 • BM 4.235 • 00055
Abstract: Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an integrated approach: the data are processed in the wavelet domain (e.g., by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. Here, we focus on the central paradigm of our framework, which separates approximation (obtained by processing the wavelet coefficients) from statistical testing. Interestingly, such a decoupling offers high flexibility in the type of processing that can be done in the wavelet domain. For example, we discuss the use of a redundant discrete wavelet transform, which provides a shift-invariant detection scheme. The key features of our technique are illustrated with experimental results. An implementation of our framework will be available as a toolbox (WSPM) for the SPM2 software.
Stochastic Resonance and Its Signal-Processing Applications
Prof. G.V. Anand, Indian Institute of Science, Bangalore
Seminar • 24 June 2005 • BM.2.131 • 00056
Abstract: The phenomenon of stochastic resonance (SR), exhibited by the class of multistable nonlinear systems, can be described as follows: the output SNR of the system shows a non-monotonic variation as the input noise intensity is varied at a fixed input signal power. We consider a symmetric 3-level quantizer as a simple example of an SR system. For a given quantizer threshold, the output SNR attains a peak at the optimal value of the input noise variance, which depends on the noise pdf. Conversely, for a fixed input noise, the output SNR may be maximized by an optimal choice of the quantizer threshold. The peak SNR gain may exceed unity if the noise pdf is 'sufficiently' non-Gaussian. This phenomenon of SNR enhancement may be exploited in many signal-processing applications involving non-Gaussian noise. We consider two applications, viz. signal detection and direction-of-arrival estimation. We show that the performance of the processors at low SNR can be enhanced significantly by the use of SR, with a negligible increase in computational or hardware complexity.
Multivariate Statistical Analysis and Classification of Zygotes
Antoine Beuchat, BIG
Seminar • 01 July 2005 • BM 4.235 • 00057
Abstract: The aim of this study is to assess, using statistical tools, whether morphological characteristics of zygotes from several IVF centers can be used as markers of future embryo developmental competence.
Fast Directional Convolution and Correlation on the Sphere
Adrien Depeursinge, Programme High-Tech, Lausanne, Switzerland
Seminar • 01 July 2005 • 00058
Abstract: The Cosmic Microwave Background (CMB) constitutes a "photography" of the early universe. A directional wavelet analysis of the spherical maps of the CMB anisotropies could bring fundamental elements to the comprehension of the evolution and the structure of our universe. To carry out this wavelet analysis, we need efficient algorithms on the sphere.
Linear Image Reconstruction from Scale Space Interest Points
Bart J. Janssen, Technische Universiteit Eindhoven, Eindhoven, The Netherlands
Seminar • 01 July 2005 • 00059
Abstract: Exploration of the information content of features that are present in the scale space of an image has led to the development of reconstruction algorithms. These algorithms aim for a reconstruction from the features that is visually close to the image from which the features were extracted. We propose a linear reconstruction framework that generalizes a previously proposed scheme. As an example, we propose a prior that is a norm induced by a Sobolev-type inner product. We apply the reconstruction algorithm to the reconstruction from non-Morse critical points of a scale-space image. Scale is taken as a control parameter. These types of points are also referred to in the literature as degenerate spatial critical points, toppoints, or catastrophes.
Reinforcement Learning in the Brain
Vasu Singh, ICI Doctoral Fellow, Lausanne, Switzerland
Seminar • 01 July 2005 • 00060
Abstract: Learning from rewards is a basic principle for the adaptation of animals to their environment. This reward-based learning is called reinforcement learning (RL). After an introduction to RL and the basic physiology of the brain, I shall describe a few models used to account for RL.
Self-similarity: from Fractals to Splines
Michael Unser, Biomedical Imaging Group, EPFL
Seminar • 08 July 2005 • BM 4.205 • 00061
Abstract: In this talk, we will show how the concept of self-similarity can be used as a bridge for connecting splines and fractals. Our starting point is the identification of the class of differential operators L that are both shift- and scale-invariant. This results in a family of generalized fractional derivatives indexed by two parameters. We specify the corresponding L-splines, which yield an extended class of fractional splines. The operator L also defines an energy measure, which can be used as a regularization functional for fitting the noisy samples of a signal. We show that, when the grid is uniform, the corresponding smoothing spline estimator is a cardinal fractional spline that can be computed efficiently by means of an FFT-based filtering algorithm.
Using Gelfand's theory of generalized stochastic processes, we then prove that the above fractional derivatives act as the whitening operators of a class of self-similar processes that includes fractional Brownian motion. Thanks to this result, we show that the fractional smoothing spline algorithm can be used to obtain the minimum mean square error (MMSE) estimation of a self-similar process at any location, given a series of noisy measurements at the integers. This proves that the fractional splines are the optimal function spaces for estimating fractal-like processes; it also provides the optimal regularization parameters.
This is joint work with Thierry Blu.
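The FFT-based filtering step admits a compact illustration. Below is a minimal sketch, assuming the regularization reduces to a pure fractional penalty lam * |w|^(2*gamma), so that the smoothing-spline estimator becomes the lowpass filter 1/(1 + lam * |w|^(2*gamma)) applied to the noisy samples; the exact filter in the talk follows from the operator L and the spline machinery, so this only illustrates the principle, not the talk's algorithm.

```python
import numpy as np

def fractional_smoothing(samples, lam=1.0, gamma=1.0):
    """Fourier-domain smoothing of uniform samples.

    Illustrative stand-in for the FFT-based fractional smoothing
    spline: the penalty lam * |w|^(2*gamma) yields the lowpass
    filter 1 / (1 + lam * |w|^(2*gamma)).
    """
    n = len(samples)
    w = 2 * np.pi * np.fft.fftfreq(n)      # frequencies in [-pi, pi)
    h = 1.0 / (1.0 + lam * np.abs(w) ** (2 * gamma))
    return np.real(np.fft.ifft(np.fft.fft(samples) * h))

# Denoise samples of a Brownian-like (fractal) signal
rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(512))     # random-walk trajectory
noisy = signal + 0.5 * rng.standard_normal(512)  # noisy samples at the integers
smoothed = fractional_smoothing(noisy, lam=2.0, gamma=1.0)
```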
Which Wavelet Bases are the Best for Image Denoising?
Florian Luisier, BIG
Test Run • 28 July 2005 • BM 4.235 • 00062
Abstract: We use a comprehensive set of non-redundant orthogonal wavelet transforms and apply a denoising method called SUREshrink in each individual wavelet subband to denoise images corrupted by additive Gaussian white noise. We show that, for various images and a wide range of input noise levels, the orthogonal fractional (alpha, tau)-B-splines give the best peak signal-to-noise ratio (PSNR), as compared to standard wavelet bases (Daubechies wavelets, symlets and coiflets). Moreover, the selection of the best set (alpha, tau) can be performed on the MSE estimate (SURE) itself, not on the actual MSE (Oracle).
Finally, the use of complex-valued fractional B-splines leads to even more significant improvements; they also outperform the complex Daubechies wavelets.
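For reference, the subband rule that SUREshrink applies is short enough to state in code. The sketch below implements the standard SURE-optimized soft threshold for a single subband of unit-variance Gaussian noise; the talk's contribution lies in the choice of the wavelet basis and of the (alpha, tau) parameters, not in this rule.

```python
import numpy as np

def sure_soft_threshold(coeffs, sigma=1.0):
    """Pick the soft threshold that minimizes Stein's Unbiased Risk
    Estimate (SURE) for one wavelet subband, as in SUREshrink.

    Returns (selected threshold, thresholded coefficients).
    """
    x = coeffs / sigma                     # work at unit noise level
    n = x.size
    best_t, best_risk = 0.0, np.inf
    for t in np.sort(np.abs(x)):           # candidate thresholds
        # SURE(t) = n - 2 * #{|x_i| <= t} + sum(min(|x_i|, t)^2)
        risk = n - 2 * np.sum(np.abs(x) <= t) \
                 + np.sum(np.minimum(np.abs(x), t) ** 2)
        if risk < best_risk:
            best_t, best_risk = t, risk
    y = np.sign(x) * np.maximum(np.abs(x) - best_t, 0.0)  # soft shrinkage
    return best_t * sigma, y * sigma
```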
Generalized L-Spline Wavelet Bases
I. Khalidov, BIG
Test Run • 28 July 2005 • BM 4.235 • 00063
Abstract: We build wavelet-like functions based on a parametrized family of pseudo-differential operators L_nu that satisfy some admissibility and scalability conditions. The shifts of the generalized B-splines, which are localized versions of the Green function of L_nu, generate a family of L-spline spaces. These spaces have an approximation order equal to the order of the underlying operator. A sequence of embedded spaces is obtained by choosing a dyadic scale progression a = 2^i. The consecutive inclusion of the spaces yields the refinement equation, where the scaling filter depends on the scale. The generalized L-wavelets are then constructed as basis functions for the orthogonal complements of the spline spaces. The vanishing-moment property of conventional wavelets is generalized to the vanishing null-space-element property. In spite of the scale dependence of the filters, the wavelet decomposition can be performed using an adapted version of Mallat's filterbank algorithm.
Semi-Orthogonal Wavelets that Behave like Fractional Differentiators
Michael Unser, BIG
Test Run • 28 July 2005 • BM 4.235 • 00064
Generalized Biorthogonal Daubechies Wavelets
Michael Unser, BIG
Test Run • 28 July 2005 • BM 4.235 • 00065
Abstract: We propose a generalization of the Cohen-Daubechies-Feauveau (CDF) and 9/7 biorthogonal wavelet families. This is done within the framework of non-stationary multiresolution analysis, which involves a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. We consider a dual pair of such multiresolutions, where the scaling functions at a given scale are mutually biorthogonal with respect to translation. Also, they must have the shortest-possible support while reproducing a given set of exponential polynomials. This constitutes a generalization of the standard polynomial reproduction property.
The corresponding refinement filters are derived from the ones that were studied by Dyn et al. in the framework of non-stationary subdivision schemes. By using different factorizations of these filters, we obtain a general family of compactly supported dual wavelet bases of L2. In particular, if the exponential parameters are all zero, one retrieves the standard CDF B-spline wavelets and the 9/7 wavelets. Our generalized description yields equivalent constructions for E-spline wavelets.
A fast filterbank implementation of the corresponding wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. This new scheme offers high flexibility and is tunable to the spectral characteristics of a wide class of signals. In particular, it is possible to obtain symmetric basis functions that are well-suited for image processing.
Wavelet Design Using Polyharmonic B-splines
Matthieu Guerquin-Kern, Département EEA, Ecole Normale Supérieure de Cachan - France
Seminar • 19 August 2005 • BM 4.235 • 00066
Abstract: The isotropic polyharmonic B-spline family has recently been defined and used for wavelet design with a two-dimensional quincunx subsampling scheme. In this presentation, we focus on the two-dimensional dyadic scheme, which reduces the number of samples at each iteration by a factor of 4; we therefore need to jointly design 3 wavelets instead of 1 in the quincunx case. First, we characterise the wavelet space by 3 so-called pre-wavelets when using the polyharmonic B-spline as a scaling function. Then, we clarify the orthonormality conditions on the wavelet basis. We propose a matrix-based method to enforce these conditions and thus construct orthonormal bases for the wavelet space. The design can be guided by a desired tiling of the frequency domain for the support of the wavelets, which can be imposed through well-chosen symmetry relations on the solution. Based on these mathematical solutions, we provide a Matlab implementation that computes the wavelet filters. Finally, we implement the wavelet transform using an iterated filterbank algorithm and apply it to test images.
Optoacoustic Imaging Modality in Medical Ultrasound Devices
Michael Jaeger, Group of Biomedical Photonics, University of Bern
Seminar • 22 August 2005 • BM 5.202 • 00067
Abstract: For the non-invasive diagnosis of cancerous tissue at an early stage, optoacoustic imaging could be a highly promising modality for common medical ultrasound devices. Optoacoustic (OA) imaging is based on the detection of pressure transients generated by the optical absorption of ns laser pulses. Our goal is to implement an efficient algorithm for computing tomographic reconstructions in real time from OA signals collected by a B-scan ultrasound device. We will give an introduction to the principles of medical optoacoustics and present our methods and latest results. Our aim is to facilitate a discussion about how to perform the reconstruction in the most efficient way and perhaps initiate an interesting collaboration with the Biomedical Imaging Group.
From Scattered Data Interpolation to Locally Regular Grid Approximation
Prof. Christophe Rabut, Centre de Mathématiques, INSA, Toulouse Cedex, France
Seminar • 29 August 2005 • BM 5.202 • 00068
Abstract: We briefly explain in this talk why polyharmonic splines are particularly well suited for multivariate scattered data interpolation. However, these functions have an important drawback: the associated linear system is full and poorly conditioned, and the evaluation of a given function may be heavy and unstable; this is probably the main reason why polyharmonic splines are not more widely used.
In contrast, using polyharmonic splines to approximate data on regular grids is quite easy, thanks to a particularly efficient basis (the so-called polyharmonic B-splines), which is an extension of the polynomial B-splines of one dimension. But the regularity of the centres of the resulting spline is then an important drawback, since in that case we need to use dense centres even where the shape of the obtained function does not require it.
This is why we propose to use locally regular grids, i.e., grids that are regular on various parts, with a step depending on the considered part. We do so in a progressive way, refining the grid in the places where the obtained function is too far from the data (and only in those parts), thus obtaining a fine (regular) grid where necessary and a coarse (regular) grid where sufficient. Furthermore, on such grids we use a hierarchical B-spline basis, which means that B-splines with a small step coexist with B-splines with a larger step. Some examples show the efficiency of the method, which can also be used with any B-spline-type function, such as refinable functions or tensor-product polynomial B-splines.
Three-Dimensional Feature Detection Using Optimal Steerable Filters
François Aguet, BIG
Test Run • 07 September 2005 • BM 4.235 • 00069
Abstract: We present a framework for feature detection in 3-D using steerable filters. These filters can be designed to optimally respond to a particular type of feature by maximizing several Canny-like criteria. The detection process involves the analytical computation of the orientation and corresponding response of the template. A post-processing step consisting of the suppression of non-maximal values followed by thresholding to eliminate insignificant features concludes the detection procedure. We illustrate the approach with the design of feature templates for the detection of surfaces and curves, and demonstrate their efficiency with practical applications.
Sampling in Practice: Is the Best Reconstruction Space Bandlimited?
Sathish Ramani, BIG
Test Run • 07 September 2005 • BM 4.235 • 00070
Abstract: Shannon's sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some "shift-invariant" space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling and the samples are corrupted by additive noise. We consider two formulations of the reconstruction problem. In the first, deterministic approach, we determine the continuous-space function that minimizes a variational, Tikhonov-like criterion that includes a discrete data term and a suitable continuous-space regularization functional. In the second formulation, we seek the minimum mean square error (MMSE) estimate of the signal, assuming that the input signal is a realization of a stationary random process. Interestingly, both approaches yield a solution included in some optimal shift-invariant space that is generally not bandlimited. The solutions can be made equivalent by choosing a regularization operator that corresponds to the whitening filter of the process. We present some practical examples that demonstrate the optimality of the approach.
Spline approximation: 2D reconstruction by optimal quasi-interpolation on Cartesian and hexagonal lattices.
Laurent Condat, Institut National Polytechnique de Grenoble, Grenoble, France
Seminar • 23 November 2005 • BM 4.235 • 00071
Abstract: It is often required to model discrete signals by a continuously defined function, e.g., for resampling purposes. This discrete-to-continuous conversion can be interpreted as a reconstruction problem: given discrete samples of an unknown function, we try to approximate it in a functional space such as a spline space. The classical consistent solution consists of choosing the spline that interpolates the data. This solution is not optimal from an approximation-theoretic point of view, and quasi-interpolation is often a better alternative. I will show how to design optimal prefilters that give better results in concrete applications such as image rotation, hexagonal-to-Cartesian resampling, and demosaicking.
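To make the baseline concrete, the classical interpolation route for a resampling task such as image rotation looks as follows: prefilter the samples into cubic B-spline coefficients, then evaluate the spline on the transformed grid. This is only the standard interpolation pipeline; the optimized quasi-interpolation prefilters designed in the talk would replace the interpolation prefilter below and are not reproduced.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)                       # placeholder test image

# Interpolation prefilter: converts samples to cubic B-spline coefficients
coeffs = ndimage.spline_filter(img, order=3)

# Coordinates of a grid rotated by 10 degrees about the image center
theta = np.deg2rad(10.0)
c = (np.array(img.shape) - 1) / 2.0
yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(np.float64)
y0, x0 = yy - c[0], xx - c[1]
rot_y = np.cos(theta) * y0 - np.sin(theta) * x0 + c[0]
rot_x = np.sin(theta) * y0 + np.cos(theta) * x0 + c[1]

# Evaluate the spline on the rotated grid (coefficients already prefiltered)
rotated = ndimage.map_coordinates(coeffs, [rot_y, rot_x],
                                  order=3, prefilter=False, mode='mirror')
```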
2004
Elastic Registration, Optimization, Angular Assignment in Electron Microscopy
Carlos Óscar Sanchez Sorzano, EPFL LIB
Meeting • 15 January 2004 • EPFL, BM.4.235 • 00005
Abstract: I will summarize my work during the last 10 months. Main topics have been: elastic registration, optimization, angular assignment in Electron Microscopy and resolution assessment.
Robust MSE Estimation: New Methods for Old Problems
Prof. Yonina Eldar, Technion, Israel Institute of Technology, Haifa, Israel
Seminar • 18 February 2004 • EPFL, BM.5.202 • 00006
Abstract: The problem of estimating a set of unknown deterministic parameters x observed through a linear transformation H and corrupted by additive noise, i.e., y = Hx + w, arises in a large variety of areas in science and engineering. Owing to the lack of statistical information about the parameters x, the estimated parameters are typically chosen to optimize a criterion based on the observed signal y. For example, the celebrated least-squares estimator is chosen to minimize the Euclidean norm of the data error y - ŷ. However, in an estimation context, the objective typically is to minimize the size of the estimation error x - x̂, rather than that of the data error y - ŷ. It is well known that estimators based on minimizing a data error can lead to a large estimation error.
In this talk, we introduce a new framework for linear estimation that is aimed at developing effective linear estimators which minimize criteria directly related to the estimation error. In developing this framework, we exploit recent results in convex optimization theory and nonlinear programming. As we demonstrate, this framework leads to new, powerful estimation methods that can significantly outperform existing estimators such as least-squares and Tikhonov regularization.
We begin by developing an estimator that minimizes the worst-case mean-squared error (MSE) over a given region of uncertainty. We then extend this estimator to include cases in which the model matrix H is not known precisely. Next, we consider competitive minimax regret approaches to linear estimation, in which we seek estimators whose performance is as close as possible to that of the optimal linear estimator that minimizes the MSE when x is assumed to be known. Finally, we extend these ideas to multichannel estimation and present several examples of applications.
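For reference only, the two baseline estimators that the talk sets out to outperform are easy to write down; the sketch below compares their estimation errors on synthetic data (the minimax-MSE and minimax-regret estimators themselves are not reproduced).

```python
import numpy as np

# Baselines for y = H x + w: least-squares and Tikhonov regularization.
rng = np.random.default_rng(1)
n, m = 50, 30
H = rng.standard_normal((n, m)) / np.sqrt(n)
x = rng.standard_normal(m)
y = H @ x + 0.3 * rng.standard_normal(n)

x_ls = np.linalg.lstsq(H, y, rcond=None)[0]          # minimizes ||y - Hx||
lam = 0.1                                            # regularization weight
x_tik = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ y)

print("LS estimation error:      ", np.linalg.norm(x - x_ls))
print("Tikhonov estimation error:", np.linalg.norm(x - x_tik))
```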
On the Feasibility of Axial Tracking of a Fluorescent Nano-Particle Using a Defocusing Model
Dimitri Van De Ville, EPFL LIB
Test Run • 06 April 2004 • BM 4.235 • 00034
Abstract: The image of a sub-resolution nano-particle in fluorescence microscopy corresponds to a slice of the 3D point spread function (PSF). This slice relates to the out-of-focus distance of the nano-particle. In this paper, we investigate to which extent it is possible to estimate the out-of-focus distance of the nano-particle from a 2D image, based on the knowledge of the 3D PSF. To this end, we compute the Cramér-Rao bound (CRB), which provides a lower bound on the error of the best estimator of the axial position. The calculation of the CRB involves the specification of a 3D PSF model, the assumption of signal-dependent Poisson noise, and some acquisition parameters. Our derivation shows that the CRB depends on the defocusing distance. Interestingly, nanometer precision can be attained over a range of defocus distances and for sufficiently high SNR levels. The theoretical results are confirmed with simulated experiments using estimators based on the least-squares (LS) and normalized cross-correlation (NCC) criteria. The results obtained are very close to the theoretical CRB.
Wavelet-Based fMRI Statistical Analysis and Spatial Interpretation: A Unifying Approach
Michael Unser, EPFL LIB
Test Run • 06 April 2004 • BM 4.235 • 00035
Abstract: Wavelet-based statistical analysis methods for fMRI are able to detect brain activity without smoothing the data. Typically, the statistical inference is performed in the wavelet domain by testing the t-values of each wavelet coefficient; subsequently, an activity map is reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. In this paper, we propose a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper non-stationary component and determine optimal threshold values by minimizing an approximation error. The sensitivity of our method is comparable to SPM's (Statistical Parametric Mapping).
Polyharmonic Smoothing Splines for Multi-Dimensional Signals with 1/||ω||^τ-Like Spectra
Shai Tirosh, EPFL LIB
Test Run • 10 May 2004 • BM 4.235 • 00038
Abstract: Motivated by the fractal-like behavior of natural images, we propose a new smoothing technique that uses a regularization functional which is a fractional iterate of the Laplacian. This type of functional has previously been introduced by Duchon in the context of radial basis functions (RBFs) for the approximation of non-uniform data. Here, we introduce a new solution to Duchon's smoothing problem in multiple dimensions using non-separable fractional polyharmonic B-splines. The smoothing is performed in the Fourier domain by filtering, thereby making the algorithm fast enough for most multi-dimensional real-time applications.
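The Fourier-domain smoothing step admits a short sketch, given below under the assumption that the regularizer reduces to an isotropic penalty lam * ||w||^(2*gamma) (a fractional iterate of the Laplacian), so that the estimator is an isotropic lowpass filter; the exact polyharmonic-B-spline treatment in the talk is not reproduced.

```python
import numpy as np

def polyharmonic_smooth(img, lam=1.0, gamma=1.0):
    """2D Fourier-domain smoothing with a fractional-Laplacian penalty.

    Illustrative sketch: the regularizer lam * ||w||^(2*gamma)
    yields the isotropic lowpass filter 1 / (1 + lam * ||w||^(2*gamma)).
    """
    wy = 2 * np.pi * np.fft.fftfreq(img.shape[0])
    wx = 2 * np.pi * np.fft.fftfreq(img.shape[1])
    W = np.sqrt(wy[:, None] ** 2 + wx[None, :] ** 2)   # radial frequency
    h = 1.0 / (1.0 + lam * W ** (2 * gamma))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))
```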
Quantitative L2 Approximation Error of a Probability Density Estimate Given by Its Samples
Thierry Blu, EPFL LIB
Test Run • 10 May 2004 • BM 4.235 • 00039
Abstract: We present a new result characterized by an exact integral expression for the approximation error between a probability density and an integer shift-invariant estimate obtained from its samples. Unlike the Parzen window estimate, this estimate avoids recomputing the complete probability density for each new sample: only a few coefficients are required, making it practical for real-time applications. We also show how to obtain the exact asymptotic behavior of the approximation error as the number of samples increases, and we provide the trade-off between the number of samples and the sampling step size.
Variational reconstruction of velocity field from Doppler measurements
Muthuvel Arigovindan, EPFL-LIB
Test Run • 15 June 2004 • BM 4.235 • 00040
Abstract: We present an algorithm to reconstruct a velocity field from Doppler measurements. We aim to reconstruct the true velocity, in contrast to the measurement itself, which is only the velocity component along the direction of the measuring beam. We formulate the reconstruction as a minimization problem, where the cost functional to be minimized is the weighted sum of the deviation from the measurements and a roughness measure.
Hybrid waveform audio models
Dr. Bruno Torresani, Université de Provence, Marseille, France
Seminar • 24 June 2004 • CO-015 • 00041
Abstract: Audiophonic signals have the peculiarity of involving significantly different components (transients, tonals,...). We describe the main features of a novel approach for modeling and coding such signals. The approach combines non-linear transform coding and structured approximation techniques (using simultaneously local cosine and wavelet bases), together with hybrid modeling of the signal class under consideration. In a few words, several different components of the signal are estimated and transform coded using an appropriately chosen orthonormal basis. We will discuss different random signal models and corresponding estimation procedures, and provide numerical results and audio illustrations. This talk is based on joint works with Laurent Daudet and Stéphane Molla, and previous collaborations with P. Guillemain and R. Kronland-Martinet.
Virtual Colonoscopy
Dr. R.M. Summers, National Institutes of Health, USA
Seminar • 05 July 2004 • BM.5.202 • 00042
Abstract: Colorectal cancer is the second leading cause of cancer death in the Western world. Virtual colonoscopy is a CT-based method that has proven capable of relatively noninvasive colorectal cancer screening. In virtual colonoscopy, three-dimensional reconstructions of the colon are prepared from CT scans of a patient's abdomen and pelvis. My research group has focused for the last several years on computer aided polyp detection for virtual colonoscopy. We have developed shape-based features using differential geometry to identify abnormal growths in the colon. In association with collaborators at other institutions, we have developed a database of over 1200 proven virtual colonoscopy cases with optical colonoscopy correlation. Using this database, we continue to make advancements in improving sensitivity and reducing the false positive rate. Current work includes image processing to register supine and prone virtual colonoscopy examinations on the same patient, computer-aided detection of normal colonic features that mimic pathology, and software systems to train classifiers, validate results, and ensure software reliability and integrity. This lecture will provide an overview of the clinical background, mathematical underpinnings, and preliminary clinical trials conducted at the National Institutes of Health.
Ronald Summers, M.D., Ph.D. is a general diagnostic radiologist and image processing researcher at the National Institutes of Health where he has worked for the past 10 years. He received his B.A. degree in physics in 1981 and his M.D. and Ph.D. degrees in 1988, all from the University of Pennsylvania. Following a medical internship, he completed a radiology residency at the University of Michigan in Ann Arbor in 1993. In 1994, he completed an MRI fellowship at Duke University. In 2000, he was awarded the Presidential Early Career Award for Scientists and Engineers by President William Clinton. His research focuses on virtual endoscopy and computer aided detection from radiologic images. He has authored or co-authored over 70 publications and has several patents.
Ronald M. Summers, M.D., Ph.D.
Chief, Clinical Image Processing Service
Department of Radiology
National Institutes of Health
Building 10 Room 1C660
10 Center Drive MSC 1182
Bethesda, MD 20892-1182
Phone: (301) 496-7700
Fax: (301) 496-9933
E-mail: rms@nih.gov
http://www.cc.nih.gov/drd/summers.html
Statistical approaches to local motion estimation
Prof. Rudolf Mester, Goethe-University, Frankfurt, Germany
Seminar • 25 June 2004 • BM.5.202 • 00043
Abstract: The natural characteristics of image signals and the statistics of measurement noise are decisive for designing optimal filter sets and optimal estimation methods in signal processing. The talk will discuss two areas where this principle is applied to the field of local motion estimation. First, the estimation of local motion is considered to be equivalent to an optimal subdivision of the signal into an ideally oriented component, parameterized by the motion direction, and an additive noise component. For optimizing this subdivision and finding the motion direction, models for the signal (i.e., its autocovariance) and for the noise are required. For practical implementations, this quite naturally motivates employing the theory of steerable filters and leads to an extension of the classical tensor-based motion estimation scheme. Second, the talk will discuss how a Wiener-type MMSE filtering of the image signal, based on a simple covariance model for moving images, helps in designing appropriate filter sets for differential or tensor-based methods in optical flow estimation. This approach provides a means for integrating prior knowledge on the distribution of expected motion vectors and possibly non-i.i.d. noise statistics (colored noise or oriented disturbances).
About the author:
Rudolf Mester (*1958) studied electrical engineering with an emphasis on communication technology at the Technical University of Aachen, Germany. After having obtained his diploma degree in 1983, he performed research work in industrial and academic projects on image processing and image coding and earned his doctoral degree (Dr.-Ing.) from RWTH Aachen in 1988 with a thesis on statistical model based image segmentation. After a short period with Philips Data Systems, Siegen, Dr. Mester joined the Communications Research Institute of Robert Bosch GmbH, Hildesheim, where he established a computer vision and image interpretation group. He initiated and conducted numerous internal as well as several national and European joint research projects especially in the field of applying computer vision to traffic-related problems and security systems. In October 1995, Dr. Mester was appointed professor of applied physics at Goethe University, Frankfurt am Main. Currently, his research interests are focussed on statistical signal and image processing methods, the construction of robust and reliable vision algorithms and flexible vision systems as well as the theoretical foundations for "seeing machines".
Nonlinear signal processing techniques for direction-of-arrival estimation
Prof. G.V. Anand, Indian Institute of Science, Bangalore
Seminar • 28 June 2004 • BM 5.202 • 00044
Abstract: Several high-resolution direction-of-arrival (DOA) estimation techniques such as MUSIC are currently in use. The performance of these techniques is known to degrade as the signal-to-noise ratio (SNR) is decreased, and the degradation becomes unacceptably large for SNR typically below 0 dB. This degradation in performance may be arrested by the application of appropriate preprocessing techniques to boost the SNR prior to DOA estimation. In this talk, two SNR enhancement techniques will be presented, viz. (1) wavelet array denoising under Gaussian noise, and (2) stochastic resonance under non-Gaussian noise. Simulation results will be presented to illustrate the improvement in the DOA estimation performance of MUSIC due to the application of these preprocessing techniques.
Biography of the speaker:
G.V. Anand, Professor
Department of Electrical Communication Engineering
Indian Institute of Science
Bangalore 560 012, India
- B.Sc (1962), M.Sc (1964), Osmania University, Hyderabad, India
- Ph.D (1969), Indian Institute of Science, Bangalore
- Joined the faculty of IISc as lecturer, ECE Department, in 1969. Professor since 1984
- Commonwealth Academic Staff Fellow, University College, London (1978-79)
- Visiting Scientist, Naval Physical & Oceanographic Laboratory, Cochin, India (1996-97)
- Invited Professor, University of Angers, France (2002 and 2004)
- Visiting Professor, Cankaya University, Ankara, Turkey (2003-04)
- Fellow of Indian Academy of Sciences, Indian National Academy of Engineering, Institution of Electronics and Telecommunication Engineers of India, Acoustical Society of India
- Research interests : mathematical modeling of undersea sound propagation, inverse problems in ocean acoustics, statistical signal processing, nonlinear dynamics
Image Sequence Superresolution--Latest Research Results
Prof. Nirmal K. Bose, Pennsylvania State University, USA
Seminar • 12 July 2004 • 00045
Abstract: Image sequence superresolution algorithms estimate a high-resolution image with finer spectral details from multiple low-resolution observations degraded by blur, noise, and undersampling. The major advantage of this approach is its practicality and cost reduction: signal processing and mathematical analysis are applied to images acquired by a low-resolution imaging system, which could be a video camera, a prefabricated multisensor array, a vibrating camera, a low-resolution scanning electron microscope, or a matrix of optoelectronic sensors, among others. The dynamism in this area of research is substantiated not only by the voluminous past activity within a reasonably short time span but also by the recognized need to meet many remaining challenges. The aim of this seminar is to provide a technical snapshot of developments to date and to introduce promising new developments in both the theory and applications of sequence superresolution imaging science and technology, including the most recent use of second-generation wavelets for this purpose.
About the author:
N.K. Bose is the HRB-Systems Professor of Electrical Engineering and University Endowed Fetter Fellow at The Pennsylvania State University, University Park. He is the author of Applied Multidimensional Systems Theory (New York: Van Nostrand Reinhold, 1982) and Digital Filters (Amsterdam, The Netherlands: Elsevier, 1985; Malabar, FL: Krieger, 1993), the main author and editor of Multidimensional Systems: Progress, Directions, and Open Problems (Dordrecht, The Netherlands: Reidel, 1985), and coauthor of Neural Network Fundamentals with Graphs, Algorithms, and Applications (New York: McGraw-Hill, 1996) and Multidimensional Systems Theory and Applications (Dordrecht, The Netherlands: Kluwer Academic Publishers, 2003). Since 1990, he has been the founding editor-in-chief of the International Journal on Multidimensional Systems and Signal Processing and has served on the editorial boards of several other journals. Professor Bose has received several honors and awards, including, more recently, the Invitational Fellowship from the Japan Society for the Promotion of Science in 1999, the Alexander von Humboldt Research Award from Germany in 2000, and the Charles H. Fetter University Endowed Fellowship in Electrical Engineering from 2001 to 2004.
High-Quality Causal Interpolation for Online Unidimensional Signal Processing
Thierry Blu, BIG, EPFL
Test Run • 27 August 2004 • BM 4.235 • 00046
Abstract: We present a procedure for designing interpolation kernels that are adapted to time signals; i.e., they are causal, even though they do not have finite support. The considered kernels are obtained by digital IIR filtering of a finite-support function that has maximum approximation order. We show how to build these kernels starting from the all-pole digital filter, and we give some practical design examples.
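For context, the classical (non-causal) baseline that this work departs from computes B-spline coefficients with a causal and an anti-causal recursive filter pass. A standard sketch for the cubic case follows, with truncated-series boundary initialization; the causal kernels of the talk are not reproduced here.

```python
import numpy as np

def cubic_bspline_coeffs(x):
    """Classical causal + anti-causal recursive prefilter for cubic
    B-spline interpolation (the non-causal baseline; the talk's point
    is to obtain purely causal kernels for online use).
    """
    z1 = -2.0 + np.sqrt(3.0)               # pole of the cubic prefilter
    n = len(x)
    cp = np.empty(n)
    # causal initialization: truncated power series of the impulse response
    cp[0] = sum(x[k] * z1 ** k for k in range(min(n, 30)))
    for k in range(1, n):                  # causal pass
        cp[k] = x[k] + z1 * cp[k - 1]
    cm = np.empty(n)
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):         # anti-causal pass
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm                        # overall gain of the cubic prefilter
```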
Model Based Image Reconstruction in MR imaging
Dr. Mathews Jacob, University Of Illinois at Urbana-Champaign, USA
Seminar • 14 October 2004 • BM1.139 • 00047
Abstract: I will present a model-based approach to the reconstruction of Magnetic Resonance Spectroscopic Imaging (MRSI) data. MRSI is emerging as a powerful tool for understanding the functioning of the brain by studying the concentration of different metabolites. The main drawback of this technique is the long acquisition time, mainly due to the large number of samples to be acquired; this limits the applicability of the method for diagnosis. Most current approaches trade spatial resolution for spectral information to perform the imaging in a reasonable time.
We present a new approach based on a deformable spatial compartment model. The compartments themselves are derived from the segmentation of a high-resolution anatomical image acquired prior to the spectroscopic scan. Since our model accounts for various non-idealities in the image formation, such as the presence of magnetic-field inhomogeneities and differences in imaging protocols, our approach gives a good fit to the low-resolution data. We use an iterative optimization algorithm to derive the model parameters.
I will also briefly touch upon a sampling problem in the context of parallel imaging. We derive an exact expression for the error in the reconstruction of parallel imaging data. We then proceed to minimize this error by choosing the optimal sampling locations in k-space using a greedy algorithm. I will also present some preliminary results.
About the speaker: Mathews Jacob was born on 16 June 1975 in Kerala, India. He obtained his B.Tech in ECE from the National Institute of Technology, Calicut, Kerala, in 1996. For one year, he worked with Wipro R&D, Bangalore, on hardware design. He received his M.E in signal processing from the Indian Institute of Science, Bangalore, in 1999 and his Ph.D from the Biomedical Imaging Group at the Swiss Federal Institute of Technology in 2003. He is currently working as a Beckman postdoctoral fellow at the University of Illinois at Urbana-Champaign. His research interests include sampling theory, steerable filters, shape extraction, and image processing.
Isotropic-Polyharmonic BSplines and Wavelets
Dimitri Van De Ville, BIG, EPFL
Test Run • 19 October 2004 • BM 4.235 • 00048
Abstract: We propose the use of polyharmonic B-splines to build non-separable two-dimensional wavelet bases. The central idea is to base our design on the isotropic polyharmonic B-splines, a new type of polyharmonic B-splines that do converge to a Gaussian as the order increases. We opt for the quincunx subsampling scheme which allows us to characterize the wavelet spaces with a single wavelet: the isotropic-polyharmonic B-spline wavelet. Interestingly, this wavelet converges to a combination of four Gabor atoms, which are well separated in frequency domain. We also briefly discuss our Fourier-based implementation and present some experimental results.
ImageJ plug-in for polyharmonic wavelet transform
Cristina Manfredotti
Seminar • 18 November 2004 • BM 4.235 • 00049
Abstract: In this report we present the implementation of an ImageJ plug-in for the polyharmonic wavelet transform that uses the quincunx subsampling scheme. First, we define the polyharmonic B-splines. Next, we describe the construction of polyharmonic wavelets using a quincunx subsampling scheme. Finally, we describe the plug-in's structure and its implementation in detail. The plug-in's performance is illustrated by some examples.
Transit and Function of T Cells Revealed in Vivo
Mikael Pittet, Director of the Cellular Imaging Program, Harvard Medical School
Seminar • 12 December 2004 • BM.2.131 • 00050
Abstract: T cells provide a complex means of defense against pathogens and cancer. To trigger an efficient immune response, rare T cells capable of recognizing pathogen- or tumor-derived peptides should divide rapidly in lymph nodes and then transit to the disease site, where they display immediate effector functions. However, T cell transit to infected tissues or tumors, as well as T cell function at the target sites, remains largely unexplored, particularly in complex in vivo environments. We are employing novel imaging modalities (including fluorescent protein tomography and intravital confocal/two-photon microscopy) for non-invasive in vivo visualization of T cells and tumor cells in various clinical settings. We aim at obtaining insights into (i) the dynamics of T cell trafficking into tumors, and (ii) the functional capacity of T cells to kill tumor cells in vivo. Because anti-tumor T cell responses that develop in cancer patients rarely result in tumor eradication, we are assessing the impact of different immunotherapeutic modalities that may promote homing to tumors as well as effector functions of tumor-specific T cells in vivo.
2003
Optimal Steerable Filters for Feature Detection
Mathews Jacob, EPFL LIB
Test Run • 12 September 2003 • 00001
Abstract: We present a new approach for the design of optimal steerable 2-D templates for feature detection. As opposed to classical schemes, where the optimal 1-D template is derived and extended to 2-D, we obtain the 2-D template directly. We choose the template from a class of steerable functions based on the analytic optimization of a Canny-like criterion. Our approach gives more orientation-selective templates that have simple closed-form expressions. We illustrate the method with the design of operators for edge and ridge detection and demonstrate their performance improvement in practical applications.
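Steerability, the property these templates are built on, is easy to demonstrate in its simplest form: the first-order derivative of a Gaussian is steerable, so its response at any orientation is an exact linear combination of two fixed basis responses. A minimal sketch follows; the talk's templates are higher-order and Canny-optimized, so only the steering mechanism is shown, on random test data.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(128, 128)                              # test image

gx = ndimage.gaussian_filter(img, sigma=2.0, order=(0, 1))  # d/dx basis response
gy = ndimage.gaussian_filter(img, sigma=2.0, order=(1, 0))  # d/dy basis response

theta = np.pi / 6
steered = np.cos(theta) * gx + np.sin(theta) * gy   # exact response at angle theta

# Per-pixel optimal orientation and its response, in closed form
theta_opt = np.arctan2(gy, gx)
max_response = np.hypot(gx, gy)
```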
From Signals and Systems to Splines and Vice Versa
Prof. Michael Unser, EPFL LIB
Test Run • 12 September 2003 • 00002
Abstract: We use an elementary signals and systems formulation to derive and explain the main properties of polynomial splines. We then extend the formulation to cardinal exponential splines. In particular, we show that these splines can be expressed as a linear combination of the integer shifts of the Green function of some underlying differential operator. We construct the corresponding compactly-supported B-spline functions and discuss some interesting connections with basic system theory.
Recursive Filtering for Splines on Hexagonal Lattices
Dimitri Van De Ville, EPFL LIB
Test Run • 12 September 2003 • 00003
Abstract: Hex-splines are a novel family of bivariate splines, which are well suited to handle hexagonally sampled data. Similar to classical 1D B-splines, the spline coefficients need to be computed by a prefilter. Unfortunately, the elegant implementation of this prefilter by causal and anti-causal recursive filtering is not applicable for the (non-separable) hex-splines. Therefore, in this paper we introduce a novel approach from the viewpoint of approximation theory. We propose three different recursive filters and optimize their parameters such that a desired order of approximation is obtained. The results for third and fourth order hex-splines are discussed. Although the proposed solutions provide only quasi-interpolation, they tend to be very close to the interpolation prefilter.
Quantitative Supersonic Flow Visualization Using Optical Tomography
Rajesh Langoju, EPFL LIB
Meeting • 30 October 2003 • EPFL, BM.4.235 • 00004
Abstract: This is an experimental report of quantitative imaging in supersonic circular jets using a monochromatic light probe. An expanding cone beam of light interrogates a three-dimensional volume of supersonic steady-state flow from a circular jet. The distortion caused to the spherical wave by the presence of the jet is determined through special phase-measuring techniques. A cone-beam algorithm is used to invert the wavefront distortion into the changes in refractive index introduced by the flow. The refractive index is converted into density, whose cross-sections reveal shocks and other characteristics of the flow.
3D Electron Microscopy: Some Challenges Ahead in Pattern Recognition
Prof. José María Carazo, Universidad Autonoma de Madrid, Spain
Seminar • 04 December 2003 • EPFL, BM.2.135 • 00007
Abstract: The standard methodology for 3D electron microscopy will be reviewed, with special emphasis on a number of methodological bottlenecks that nowadays preclude attaining a resolution below 1 nm in a routine fashion. The challenges will be illustrated with experimental examples in the field of structural studies of replicative helicases.
Exponential-Spline Wavelets
Ildar Khalidov, EPFL LIB
Meeting • 08 December 2003 • EPFL, BM.4.235 • 00008
Abstract: A multiresolution is built using exponential splines as scaling functions. A generalization for systems with rational transfer functions is considered. Taking the wavelet coefficients of a function f then corresponds to applying an operator with rational transfer function to a lowpass-filtered version of f.
Polyharmonics Day: Isotropic Polyharmonic B-Splines and Wavelets
Dimitri Van De Ville, EPFL LIB
Seminar • 14 November 2003 • EPFL, BM.4.235 • 00009
Abstract: We propose the use of polyharmonic B-splines to build multi-dimensional wavelet transforms. These functions are non-separable multi-dimensional basis functions that are based on the localization of radial basis functions.
Cardinal Exponential Splines
Prof. Michael Unser, EPFL LIB
Meeting • 08 December 2003 • EPFL, BM.4.235 • 00010
Abstract: Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain.
Automatic Tracking of Particles in Dynamic Fluorescence Microscopy
Daniel Sage, EPFL LIB
Test Run • 12 September 2003 • 00011
Abstract: We present a new, robust algorithm for tracking fluorescent particles in dynamic image sequences obtained by brightfield or confocal microscopy. Specifically, we consider the problem of extracting the movement of chromosomal telomeres within the nucleus of a budding yeast cell. Our method has three components. The first is an alignment module that compensates for the movement of the biological structure under investigation. In our application, the images are aligned to the center of gravity of the nucleus, which is detected by thresholding and fitted with an ellipse. The second step is a Mexican-hat filtering, which we show to be optimally tailored to the detection of a Gaussian-like spot in fractal noise. The final component is a tracking algorithm that uses dynamic programming to extract the optimal (x,y,t) trajectory of a particle. We have implemented the method as a Java plugin for the public-domain ImageJ software. We have applied it to real data and have obtained results that are as good as, if not better than, manual tracings. Our new algorithm reduces the analysis time of a 300-image sequence from 10 minutes, when done manually, to just a few seconds, and offers the benefit of reproducibility.
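The detection step has a particularly simple core. The sketch below shows only that core, under the assumption that a Laplacian-of-Gaussian ("Mexican hat") filter followed by local-maximum selection and thresholding is an adequate stand-in; the alignment module and the dynamic-programming trajectory linking of the paper are not reproduced.

```python
import numpy as np
from scipy import ndimage

frame = np.random.rand(256, 256)                     # placeholder frame

# Mexican-hat response: bright blobs become positive peaks
resp = -ndimage.gaussian_laplace(frame, sigma=2.0)

# Keep local maxima above a simple global threshold
local_max = resp == ndimage.maximum_filter(resp, size=5)
detections = np.argwhere(local_max & (resp > resp.mean() + 3 * resp.std()))
```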
Polyharmonics Day: Multidimensional MRA Based on Polyharmonic Splines
Barbara Bacchelli, Università degli Studi di Milano-Bicocca, Italy
Seminar • 14 November 2003 • EPFL, BM.4.235 • 00013
Abstract: We present an MRA based on non-separable polyharmonic scaling functions. We provide the explicit construction of the filters and the convergence rate of the multiresolution approximations.
Non-Linear Fresnelet Approximation for Interference Term Suppression in Digital Holography
Michael Liebling, EPFL LIB
Test Run • 11 July 2003 • 00014
Abstract: We present a zero-order and twin-image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that in the Fresnelet domain, the coefficients associated with the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for more targeted processing.
A New Family of Complex Rotation-Covariant Multiresolution Bases in 2D
Brigitte Forster, EPFL LIB
Test Run • 11 July 2003 • 00015
Abstract: We present complex rotation-covariant multiresolution families aimed at image analysis. Since they are complex-valued functions, they provide the important phase information, which is missing in the discrete wavelet transform with real wavelets. Our basis elements have nice properties in Hilbert space, such as smoothness of fractional order (alpha in R+) and fast decay. The corresponding filters allow an FFT-based implementation and thus provide a fast algorithm for the wavelet transform.
Fractional Wavelets, Derivatives, and Besov Spaces
Michael Unser, EPFL LIB
Test Run • 11 July 2003 • 00016
Abstract: We show that a multi-dimensional scaling function of order alpha (possibly fractional) can always be represented as the convolution of a polyharmonic B-spline of order alpha and a distribution with a bounded Fourier transform which has neither order nor smoothness. The presence of the B-spline convolution factor explains all key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation property, and smoothness of the basis functions. The B-spline factorization also gives new insights on the stability of wavelet bases with respect to differentiation. Specifically, we show that there is a direct correspondence between the process of moving a B-spline factor from one side to another in a pair of biorthogonal scaling functions and the exchange of fractional integrals/derivatives on their wavelet counterparts. This result yields two "eigen-relations" for fractional differential operators that map biorthogonal wavelet bases into other stable wavelet bases. This formulation provides a better understanding as to why the Sobolev/Besov norm of a signal can be measured from the lp-norm of its rescaled wavelet coefficients. Indeed, the key condition for a wavelet basis to be an unconditional basis of the Besov space B_q^s(Lp) is that the s-order derivative of the wavelet be in Lp.
Wavelets Versus Resels in the Context of fMRI: Establishing the Link with SPM
Dimitri Van De Ville, EPFL LIB
Test Run • 11 July 2003 • 00017
Abstract: Statistical Parametric Mapping (SPM) is a widely deployed tool for detecting and analyzing brain activity from fMRI data. One of SPM's main features is smoothing the data by a Gaussian filter to increase the SNR. The subsequent statistical inference is based on continuous Gaussian random field theory. Since the remaining spatial resolution has deteriorated due to smoothing, SPM introduces the concept of "resels" (resolution elements), or spatial information-containing cells. The number of resels turns out to be inversely proportional to the size of the Gaussian smoother.
Detection of the activation signal in fMRI data can also be done by a wavelet approach: after computing the spatial wavelet transform, a straightforward coefficient-wise statistical test is applied to detect activated wavelet coefficients. In this paper, we establish the link between SPM and the wavelet approach based on two observations. First, the (iterated) lowpass analysis filter of the discrete wavelet transform can be chosen to closely resemble SPM's Gaussian filter. Second, the subsampling scheme provides us with a natural way to define the number of resels, i.e., the number of coefficients in the lowpass subband of the wavelet decomposition. Using this connection, we can obtain the degree of the splines of the wavelet transform that makes it equivalent to SPM's method. We show results for two biorthogonal wavelet transforms that are particularly attractive for this task, i.e., 3D fractional-spline wavelets and 2D+Z fractional quincunx wavelets. The activation patterns are comparable to SPM's.
Local Amplitude and Phase Retrieval Method for Digital Holography Applied to Microscopy
Michael Liebling, EPFL LIB
Test Run • 17 June 2003 • 00018
Abstract: We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point, we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and gives great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for the real-time imaging of biological samples.
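The per-point solver has the generic structure of a small weighted least-squares problem. The sketch below shows only that structure; the actual design matrix encoding the off-axis hologram model is specific to the paper and is not reproduced here.

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min_p || diag(sqrt(w)) (A p - b) ||^2.

    A: (n, m) design matrix for one point (model-specific, assumed given)
    b: (n,) local measurements, w: (n,) non-negative weights.
    """
    sw = np.sqrt(w)
    return np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
```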
Angular Assignment Using the Visible Space and a Fast Wavelet Based Correction
Carlos Óscar Sanchez Sorzano, EPFL LIB
Test Run • 05 June 2003 • 00019
Abstract: 3D Electron Microscopy (3DEM) aims at the determination of the spatial distribution of the Coulomb potential of macromolecular complexes. This information is crucial in structural biology and provides key information about the way that macromolecules interact. 3D Electron Tomography computes the 3D reconstruction of a macromolecule based on the information provided by thousands of 2D projections acquired with an electron microscope. One of the key parameters required to perform such a 3D reconstruction is the direction of projection of each projection image, which is unknown a priori and must be determined by some algorithm. This information is usually coded with three Euler angles.
The visible space is defined as the subspace spanned by the set of projections that can be seen given a reference model. We propose to project the experimental images onto the visible space and to use wavelets in order to match the experimental projections with those obtained from a model volume used as reference. The projection onto the visible space prevents the algorithm from matching features that are not feasible given the reference model. On the other hand, the wavelet decomposition of the projection images provides a framework for a multiscale matching algorithm in which speed and robustness against noise are gained. Results obtained from computer simulations, in terms of accuracy and speed, encourage the use of this approach.
Using Relative Risk Models for Estimating Synergy Between Two Risk Factors
Prof. Pascal Roy, Director of the Laboratory of Medical Biostatistics, Lyon University Hospital, Lyon, France
Seminar • 06 June 2003 • 00020
Abstract: The synergy between two risk factors measures the extent to which each of them is more harmful in combination with the other than when acting alone. Relative risk models which describe interaction between risk factors with a few parameters are well adapted to the quantification of synergy. A simple measure of synergy and a method for its estimation with the help of relative risk models are proposed. This concept of synergy provides a simple interpretation of the non-linear parameter of several classical relative risk models. The discussion is illustrated with examples taken from cancer epidemiology of the upper aerodigestive tract.
Methodological Questions in Sentinel Lymph Node Analysis in Breast Cancer Patients
Prof. Pascal Roy, Director of the Laboratory of Medical Biostatistics, Lyon University Hospital, Lyon, France
Seminar • 06 June 2003 • 00021
Abstract: The sentinel lymph node (SLN) procedure has been proposed to women with breast cancer with clinically negative axillary lymph nodes, in order to avoid conventional axillary lymph node dissection and its associated side-effects. Methodological aspects of the validation of the SLN procedure are questioned here. Both the sensitivity and the negative predictive value of the SLN procedure are overestimated if the probability of missing lymph node metastases is not taken into account, even when a complete axillary dissection is performed as a control. The SLN strategy and its effects on staging and treatment cannot be evaluated by comparison with conventional axillary lymph node dissection in a one-arm study but require carefully designed randomized trials.
Sampling Theory in Practical Applications
Akira Hirabayashi, Yamaguchi University, Japan
Seminar • 02 June 2003 • 00022
Abstract: Sampling theory provides a basis for signal processing by digital computational engines. Appropriate usage of the theory gives rise to significant benefits in practical applications. One such example is found in white-light interferometry, a technique for testing micrometer-order surface configurations of objects such as semiconductors or liquid crystal displays. Conventional algorithms use digital signal processing techniques as mere approximations of continuous signal processing. They require narrow sampling intervals in order to achieve good approximation accuracy. In contrast, we devised a new algorithm based on sampling theory and extended the sampling interval to 6-14 times wider than those used in conventional algorithms. The new algorithm has been installed in a commercial system that achieved the world's fastest vertical scanning speed. Some further topics about sampling theory in other applications will also be presented.
Global and Sliding-Window Transform Methods in Image Processing: Restoration, Resampling, and Target Location
Prof. Leonid Yaroslavsky, Dept. of Interdisciplinary Studies, Tel Aviv University, Israel
Seminar • 19 May 2003 • 00023
Abstract: Any signal processing is a process carried out in the domain of a certain integral signal transform. In digital processing, it is the set of coefficients of the signal's representation over selected transform basis functions that is subject to modification in the processing. The selection of the transform is governed, depending on the application, by such transform features as signal energy compaction capability, computational complexity, the ease of global/local adaptivity, and the appropriateness to the processing task. In this talk, global and sliding-window transform-domain methods for image processing are reviewed for such applications as image restoration, image resampling, and target location. For image restoration (blind denoising/deblurring), sliding-window DCT (SWDCT) domain adaptive filters and hybrid SWDCT/wavelet filters are advocated as a tool for multi-component and space-variant image deblurring and edge-preserving denoising. For image resampling, global and sliding-window DCT-domain methods are described. The global DCT-domain method is capable of boundary-effect-free regular signal resampling with arbitrary interpolation kernels, including that of discrete sinc interpolation. The SWDCT method is applicable to arbitrary irregular signal resampling with simultaneous signal restoration. It is also well suited for local adaptive resampling, with adaptation of the interpolation kernel to local signal features. For target location, global and sliding-window DFT/DCT transform methods are described that implement an optimal adaptive correlator for reliable target location in single- and multi-component images with heavily cluttered backgrounds.
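The SWDCT mechanism itself is compact: transform each sliding window, modify the coefficients, inverse-transform, and average the overlapping estimates. The sketch below uses a plain hard threshold as a placeholder for the adaptive filters discussed in the talk.

```python
import numpy as np
from scipy.fft import dctn, idctn

def swdct_denoise(img, win=8, thresh=0.1):
    """Sliding-window DCT denoising sketch: per-window DCT,
    hard-threshold the AC coefficients, and average overlaps."""
    out = np.zeros(img.shape, dtype=np.float64)
    weight = np.zeros_like(out)
    for i in range(img.shape[0] - win + 1):
        for j in range(img.shape[1] - win + 1):
            block = img[i:i + win, j:j + win].astype(np.float64)
            c = dctn(block, norm='ortho')
            dc = c[0, 0]
            c[np.abs(c) < thresh] = 0.0      # hard threshold
            c[0, 0] = dc                     # preserve the local mean
            out[i:i + win, j:j + win] += idctn(c, norm='ortho')
            weight[i:i + win, j:j + win] += 1.0
    return out / weight
```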
Orthogonal Hilbert Transform Filter Banks and Wavelets
Thierry Blu, EPFL LIB
Test Run • 31 March 2003 • 00027
Abstract: Complex wavelet transforms offer the opportunity to perform directional and coherent processing based on the local magnitude and phase of signals and images. Although denoising, segmentation, and image enhancement are significantly improved using complex wavelets, the redundancy of most current transforms hinders their application in compression and related problems. In this paper, we introduce a new orthonormal complex wavelet transform with no redundancy for both real- and complex-valued signals. The transform's filterbank features a real lowpass filter and two complex highpass filters arranged in a critically sampled, three-band structure. Placing symmetry and orthogonality constraints on these filters, we find that each highpass filter can be factored into a real highpass filter followed by an approximate Hilbert transform filter.
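For background, the local magnitude and phase that such transforms expose are exactly what the analytic signal provides in 1-D. The sketch below shows the Hilbert-transform concept only; the critically sampled three-band filterbank constructed in the paper is not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

# A decaying chirp-like test signal
t = np.linspace(0, 1, 512, endpoint=False)
x = np.cos(2 * np.pi * 40 * t) * np.exp(-3 * t)

z = hilbert(x)                  # analytic signal x + j * H{x}
magnitude = np.abs(z)           # local envelope
phase = np.unwrap(np.angle(z))  # local phase
```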
Recursive Filtering for Splines on Hexagonal Lattices
Thierry Blu, EPFL LIB
Test Run • 31 March 2003 • 00028
Abstract: Hex-splines are a novel family of bivariate splines, which are well suited to handle hexagonally sampled data. Similar to classical 1D B-splines, the spline coefficients need to be computed by a prefilter. Unfortunately, the elegant implementation of this prefilter by causal and anti-causal recursive filtering is not applicable for the (non-separable) hex-splines. Therefore, in this paper we introduce a novel approach from the viewpoint of approximation theory. We propose three different recursive filters and optimize their parameters such that a desired order of approximation is obtained. The results for third and fourth order hex-splines are discussed. Although the proposed solutions provide only quasi-interpolation, they tend to be very close to the interpolation prefilter.
True 2D Velocity Display by Multiscale Motion Mapping--New Insights into Complex Cardiac Motion Patterns
Michael Sühling, EPFL LIB
Test Run • 27 March 2003 • 00029
Abstract: Multiscale Motion Mapping ("Triple-M imaging") is a novel image-processing technology that combines multiscale optical-flow techniques, spline imaging, and comprehensive mathematical analysis in space and time. In contrast to Tissue Doppler Imaging or endocardial border-detection algorithms, the use of all the available grayscale information yields quantitative motion maps that are neither angle-dependent nor limited to endocardial visibility. This allows observation and quantitation of motion in every sector of the ultrasound image. The technique was applied to clinical ultrasound, after validation with a rotating phantom.
Multiscale Motion Mapping (Triple-M Imaging) for Color-Coded Analysis of Stress Echocardiograms
Michael Sühling, EPFL LIB
Test Run • 27 March 2003 • 00030
Abstract: Multiscale Motion Mapping ("Triple-M imaging") is a novel imaging modality for the measurement of motion in echocardiograms. In contrast to Tissue Doppler Imaging or endocardial border-detection algorithms, the use of all the available grayscale information yields quantitative motion maps that are neither angle-dependent nor limited to endocardial visibility. To test the feasibility of detecting abnormal motion in stress echoes, echo data from various stress states in experimental myocardial infarction in an animal model (6 mongrel dogs) were analyzed in non-infarcted and infarcted segments.
Algorithmic Aspects of Tomographic Reconstruction from Parallel and Diffracted Projections
Michael Liebling, EPFL LIB
Test Run • 13 February 2003 • 00031
Abstract: We review our recent work on a high-quality discretization of the Radon transform and filtered back-projection. We focus on issues regarding the trade-off between interpolation model, sampling-step size, number of projections, and computational complexity. We then present a wavelet-based approach for the reconstruction of images from different kinds of measurements: digital holograms, and projections obtained by optical diffraction tomography. It is based on Fresnelet bases, which are wavelets that we have specifically designed for problems involving wave propagation. Numerical experiments on synthetic and real-world data demonstrate the soundness of our approach.
Myocardial Motion Analysis and Visualization from Echocardiograms
Michael Sühling, EPFL LIB
Test Run • 05 February 2003 • 00032
Abstract: We present a new framework to estimate and visualize heart motion from echocardiograms. For velocity estimation, we have developed a novel multiresolution optical flow algorithm. In order to account for typical heart motions such as contraction/expansion and shear, we use a local affine model for the velocity in space and time. The motion parameters are estimated in the least-squares sense inside a sliding spatio-temporal window. The estimated velocity field is used to track a region of interest, which is represented by spline curves. In each frame, a set of sample points on the curves is displaced according to the estimated motion field. The contour in the subsequent frame is obtained by a least-squares spline fit to the displaced sample points. This ensures robustness of the contour tracking. From the estimated velocity, we compute a radial velocity field with respect to a reference point. Inside the time-varying region of interest, the radial velocity is color-coded and superimposed on the original image sequence in a semi-transparent fashion. In contrast to conventional Tissue Doppler methods, this approach is independent of the incident angle of the ultrasound beam. The motion analysis and visualization provide an objective and robust method for the detection and quantification of myocardial malfunctioning. Promising results are obtained from synthetic and clinical echocardiographic sequences.
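The estimation step can be sketched in a simplified form: inside a sliding window, fit a velocity by least squares on the optical-flow constraint Ix*u + Iy*v + It = 0. The code below uses a locally constant velocity model for brevity, whereas the talk uses a local affine model in space and time.

```python
import numpy as np
from scipy import ndimage

def local_flow(f0, f1, sigma=2.0, win=9):
    """Locally constant least-squares optical flow between two frames
    (simplified stand-in for the talk's local affine model)."""
    Ix = ndimage.gaussian_filter(f0, sigma, order=(0, 1))  # spatial gradients
    Iy = ndimage.gaussian_filter(f0, sigma, order=(1, 0))
    It = ndimage.gaussian_filter(f1 - f0, sigma)           # temporal derivative
    S = lambda a: ndimage.uniform_filter(a, size=win)      # windowed sums
    # Entries of the per-pixel 2x2 normal equations
    A11, A12, A22 = S(Ix * Ix), S(Ix * Iy), S(Iy * Iy)
    b1, b2 = -S(Ix * It), -S(Iy * It)
    det = A11 * A22 - A12 ** 2 + 1e-12                     # regularized inverse
    u = (A22 * b1 - A12 * b2) / det
    v = (A11 * b2 - A12 * b1) / det
    return u, v
```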
Multiresolution-Based Registration of a Volume to a Set of Its Projections
Slavica Jonic, EPFL LIB
Test Run • 05 February 2003 • 00033
Abstract: We have developed an algorithm for the rigid-body registration of a 3D CT to a set of C-arm images by matching them to computed cone-beam projections of the CT (DRRs). We precomputed rescaled versions (pyramid) of the CT volume and of the C-arm images. We perform the registration of the CT to the C-arm images starting from their coarsest resolution until we reach some finer resolution that offers a good compromise between time and accuracy. To achieve precision, we use a cubic-spline data model to compute the data pyramids, the DRRs, and the gradient and the Hessian of the cost function. We validate our algorithm on a 3D CT and on C-arm images of a cadaver spine using fiducial markers. When registering the CT to two C-arm images, our algorithm operates safely if the angle between the two image planes is larger than 10°. It achieves an accuracy of approximately 2.0±1.0 mm.
Statistical study of shot-noise-limited electron microscopy
Jonathan Dong
Meeting • • 00372
Abstract: