when: April 17, 2015 at 2PM | where: Laboratoire AstroParticule et Cosmologie (check the directions here), room Valentin
who:
Piotr Fryzlewicz, London School of Economics | Balazs Kegl, Laboratoire LAL | Fred Ngolè, Laboratoire AIM
what:
- 14:00-14:35: Piotr Fryzlewicz (London School of Economics, UK).
Title: SHAH: SHape-Adaptive Haar wavelets for image denoising and classification.
Abstract: We propose the SHAH (SHape-Adaptive Haar) transform for images, which results in an orthonormal, adaptive decomposition of the image into Haar-wavelet-like components, arranged hierarchically according to decreasing importance, whose shapes reflect the features present in the image. The decomposition is as sparse as it can be for piecewise-constant images. It is performed via an iterative bottom-up algorithm with quadratic computational complexity; however, nearly-linear variants also exist. SHAH is rapidly invertible. We show how to use SHAH for image denoising. Having performed the SHAH transform, the coefficients are hard- or soft-thresholded, and the inverse transform taken. The SHAH image denoising algorithm compares favourably to the state of the art. We also use SHAH to define the BAGIDIS semi-distance between images. It compares both the amplitudes and the locations of the SHAH components of the input images and is flexible enough to account for feature misalignment. A clear asset of the methodology is its very general scope: it can be used with any images or more generally with any data that can be represented as graphs or networks.
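SHAH itself is shape-adaptive and, as far as we know, not available as an off-the-shelf package; purely as an illustration of the threshold-and-invert denoising step described in the abstract, the sketch below uses an ordinary (non-adaptive) 2-D Haar transform from PyWavelets as a stand-in for the SHAH decomposition. The toy image and the threshold value are arbitrary assumptions.

```python
# Illustration of wavelet-thresholding denoising with a plain Haar transform
# (a stand-in for SHAH, which adapts the shapes of its components to the image).
import numpy as np
import pywt

def haar_threshold_denoise(image, threshold, mode="soft"):
    # Decompose the image into Haar wavelet coefficients.
    coeffs = pywt.wavedec2(image, "haar")
    # Keep the coarsest approximation; hard- or soft-threshold the details.
    new_coeffs = [coeffs[0]]
    for level in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, threshold, mode=mode)
                                for d in level))
    # Take the inverse transform to obtain the denoised image.
    return pywt.waverec2(new_coeffs, "haar")

# Toy piecewise-constant image with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = haar_threshold_denoise(noisy, threshold=0.4, mode="hard")
```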
- 14:40-15:15: Balazs Kegl (LAL).
Title: Machine Learning in physics.
Abstract: Classification algorithms have been routinely used since the 90s in high-energy physics to separate signal and background in particle detectors. The goal of the classifier is to maximize the sensitivity of a counting test in a selection region. It is similar in spirit to, but formally different from, the classical objectives of minimizing misclassification error or maximizing AUC. We start the talk by motivating the problem with an ongoing example: detecting the Higgs boson in the tau-tau decay channel in the ATLAS detector of the LHC. We formalize the problem, describe the usual analysis chain, and explain some of the choices physicists make when designing a classifier to optimize the discovery significance. We then present a data challenge we organized to draw the attention of the machine learning and statistics communities to this important application and to improve the techniques used to optimize the discovery significance. We finish the talk by discussing a new scheme for rapid analytics that we have been developing for prototyping data science solutions to domain science problems.
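The "discovery significance" objective mentioned in the abstract was made concrete in the HiggsML data challenge through the approximate median significance (AMS). Below is a minimal Python sketch, not taken from the talk, of how one might score a selection region and scan classifier-score thresholds against this objective; the scores, labels, and event weights are assumed placeholder inputs.

```python
# Minimal sketch of optimizing a counting-test objective (the AMS used in
# the HiggsML challenge) over classifier-score thresholds.
import numpy as np

def ams(s, b, b_reg=10.0):
    # Approximate median significance of s (weighted) signal events on top of
    # b (weighted) background events; b_reg is a regularization term.
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log(1.0 + s / (b + b_reg)) - s))

def best_selection(scores, labels, weights):
    # Scan thresholds and keep the one maximizing the AMS of the
    # selection region {score > threshold}.
    best_ams, best_threshold = -np.inf, None
    for t in np.unique(scores):
        selected = scores > t
        s = weights[selected & (labels == 1)].sum()  # signal in the region
        b = weights[selected & (labels == 0)].sum()  # background in the region
        value = ams(s, b)
        if value > best_ams:
            best_ams, best_threshold = value, t
    return best_ams, best_threshold
```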
- 15:15-15:45: Coffee and discussion.
- 15:45-16:20: Fred Ngolè (AIM).
Title: PSFs field super-resolution and low dimensionality prior.
Abstract: In large-scale space surveys such as the forthcoming ESA Euclid mission, images may be undersampled because of the size of the optical sensors. Therefore, one has to use a super-resolution (SR) method to recover aliased frequencies prior to further processing. This is particularly relevant for point-source images, which provide direct measurements of the instrument point spread function (PSF). Because of the spatial and temporal variations of the PSFs, one has only a single low-resolution (LR) measurement of the PSF at a given position in the instrument's field of view. The SR problem is therefore severely underdetermined. However, the target well-resolved PSFs are quite similar to each other: formally, as vectors, they lie in a low-dimensional subspace. We show that, given a set of LR PSFs at random positions in the field of view, this prior makes it possible to accurately recover the corresponding well-resolved PSFs.
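To picture the low-dimensionality prior, here is a toy Python sketch, not the method of the talk: vectorized PSF images are stacked as columns of a matrix and projected onto a rank-r subspace with a truncated SVD. The array sizes, the rank, and the random data are arbitrary assumptions.

```python
# Toy illustration of the low-dimensional-subspace prior on a stack of PSFs.
import numpy as np

def project_low_rank(psf_stack, rank):
    # psf_stack: (n_pixels, n_psfs) matrix whose columns are vectorized PSFs.
    U, s, Vt = np.linalg.svd(psf_stack, full_matrices=False)
    # Keep only the leading `rank` components; the discarded residual is what
    # the prior treats as noise or aliasing.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Hypothetical usage: 100 PSFs of 32x32 pixels assumed to span a rank-5 subspace.
rng = np.random.default_rng(1)
psfs = rng.random((32 * 32, 100))
psfs_low_rank = project_low_rank(psfs, rank=5)
```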