This colloquium takes place every other Wednesday afternoon, 16:00-17:00, in the MathLab on the 9th floor of the NU building (the seminar room next to the common area). For the moment we meet on Zoom; please ask for the meeting ID and password. For more information, please contact one of the organizers: Dennis Dobler, Ilke Canakci, and Fahimeh Mokhtari.
A database of earlier years' talks can be found here.
Upcoming talks in 2020:
Wednesday September 23, Speaker: Paulo Serra (VU Amsterdam), Zoom meeting
Title: Short tour through Mathematical Statistics
In this talk I will take you through some fundamental ideas in Mathematical Statistics. I will use the regression model to illustrate these ideas, as well as some common approaches to inference, and I will make a connection with some concrete problems that I am currently working on. I will close by presenting some recent work. Technical aspects will be kept to a minimum so that everyone can follow.
Previous talks in 2020:
Wednesday September 9, Speaker: Joost Berkhout (VU Amsterdam), Zoom meeting
Title: Production Scheduling in an Industry 4.0 Era
In this talk, a modern industrial plant is considered that produces a large variety of composite biomaterials. Incoming orders are processed in real time and slotted into a production schedule to meet the required delivery deadlines. The scheduling problem is complicated by numerous constraints, chief among them the limited storage capacity for intermediate and finished products and the need to avoid contamination between product runs. To tackle this scheduling problem, an algorithm is presented that combines a state-of-the-art model-based evolutionary algorithm (the Gene-pool Optimal Mixing Evolutionary Algorithm) with mixed integer linear programming. Results from numerical experiments will be presented to demonstrate the effectiveness of the algorithm.
Wednesday June 17, Speaker: Wioletta Ruszel (Utrecht University), Zoom meeting
Title: Emergence of interfaces from sandpile models
Interfaces separating two phases (e.g. water and ice) are created in phase-coexistence situations, such as at 0 degrees Celsius. Examples of random interface models (in continuum space) are the Gaussian free field and fractional Gaussian fields. In this talk we explain how Gaussian interface models emerge from divisible sandpiles. A divisible sandpile model is defined as follows: given a graph G, assign a (real-valued) height s(x) to each vertex x of G. A positive value s(x)>0 is interpreted as mass and a negative one as a hole. At every time step, do the following: if the mass s(x)>1, then keep mass 1 and redistribute the excess among the neighbours. Under some conditions, the sandpile configuration will stabilise, meaning that all heights become less than or equal to 1. The odometer function u(x) records the amount of mass emitted from x during stabilisation. It turns out that, depending on the initial configuration and the redistribution rule, the odometer interface (u(x))_(x in G) scales to a Gaussian field. The results presented in this talk are in collaboration with A. Cipriani (TU Delft), L. Chiarini (TU Delft/IMPA), J. de Graaff (TU Delft), R. Hazra (ISI Kolkata) and M. Jara (IMPA).
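The toppling rule described in the abstract is easy to simulate. The following sketch assumes the excess mass is split equally among the neighbours (one common redistribution rule; the talk considers more general rules) and topples all unstable vertices in parallel:

```python
import numpy as np

def stabilize(s, adjacency, tol=1e-12, max_rounds=100_000):
    """Parallel toppling of a divisible sandpile.

    s         -- initial real-valued heights, one per vertex
    adjacency -- adjacency[x] lists the neighbours of vertex x
    Returns the stabilised heights and the odometer u, where u[x]
    is the total mass emitted from vertex x during stabilisation.
    """
    s = np.asarray(s, dtype=float).copy()
    u = np.zeros_like(s)
    for _ in range(max_rounds):
        excess = np.maximum(s - 1.0, 0.0)
        if excess.max() < tol:
            break
        s -= excess            # each unstable vertex keeps height 1
        u += excess
        for x in np.nonzero(excess)[0]:
            share = excess[x] / len(adjacency[x])
            for y in adjacency[x]:
                s[y] += share  # excess split equally among neighbours
    return s, u

# Cycle graph on 6 vertices with all mass piled on vertex 0;
# total mass equals the number of vertices, so the stable
# configuration has height 1 everywhere.
n = 6
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
heights, odometer = stabilize([float(n)] + [0.0] * (n - 1), adj)
```

On this example the odometer is largest at the source vertex and decays with distance from it, matching the intuition that u(x) measures how much mass had to flow through x.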
Wednesday April 8, Speaker: Johannes Schmidt-Hieber (University of Twente), Zoom Meeting
Title: Towards a statistical foundation of deep learning
Recently a lot of progress has been made in the theoretical understanding of deep learning. One of the very promising directions is the statistical approach, which interprets deep learning as a statistical method and builds on existing techniques in mathematical statistics to derive theoretical error bounds. The talk surveys this field and describes future challenges.
Wednesday February 26, Speaker: Nicolas Garcia Trillos (Assistant Professor, University of Wisconsin-Madison)
Title: From clustering with graph cuts to isoperimetric inequalities: quantitative convergence rates of Cheeger cuts on data clouds
Graph cuts have been studied for decades in the mathematics and computer science communities, and in modern applications in machine learning they have been used to formulate optimization problems for data clustering. A canonical example with historical motivation is the so-called Cheeger cut problem. This problem is on the one hand intuitively motivated, but on the other, highly non-convex, with a pessimistic NP-hard label stamped on it (at least in a worst-case setting). Despite this, in the past decade or so, several algorithmic improvements have made the minimization of Cheeger cuts more feasible, and at the same time there has been a renewed interest in studying statistical properties of Cheeger cuts. New analytical ideas have provided new tools to attack problems that were elusive using classical approaches from statistics and statistical learning theory. Despite these advances, several questions remain unanswered. The purpose of this talk is to present some of these theoretical developments, with emphasis on new results where, for the first time, high-probability convergence rates of Cheeger cuts of proximity graphs over data clouds are deduced. These quantitative convergence rates are obtained by building bridges between the original clustering problem and another field within the mathematical analysis community that has seen enormous advancements in the past few years: quantitative isoperimetric inequalities. This connection serves as a metaphor for how the mathematical analyst may be able to contribute to answering theoretical questions in machine learning, and how one may be able to deduce statistical properties of solutions to learning optimization problems that have a continuum counterpart.
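For readers unfamiliar with graph-based clustering, the standard spectral relaxation of such cut problems thresholds the second-smallest eigenvector (the Fiedler vector) of a graph Laplacian built over the data cloud. A minimal illustration on synthetic data (the Gaussian-similarity graph and all parameter values below are illustrative choices, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data cloud: two well-separated blobs in the plane.
X = np.vstack([rng.normal((-3.0, 0.0), 0.5, size=(30, 2)),
               rng.normal((+3.0, 0.0), 0.5, size=(30, 2))])

# Weighted proximity graph with Gaussian similarities.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)
np.fill_diagonal(W, 0.0)

# Unnormalised graph Laplacian L = Deg - W.
L = np.diag(W.sum(axis=1)) - W

# The sign pattern of the second-smallest eigenvector of L
# (the Fiedler vector) gives a relaxed two-way cut.
_, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

On this toy cloud the sign split recovers the two blobs exactly; the talk's results concern how such cuts of proximity graphs behave as the number of sample points grows.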
Wednesday February 12, 2020: Daniele Avitabile (VU), 16:00-17:00 in MathLab (NU-09A46)
Title: This is not a bump
This talk discusses patterns in a well-known spatially-extended, deterministic network of synaptically-coupled spiking neurons, which supports coherent structures commonly referred to as “bump” and “wandering bump”. Patterns of this type are observable in a variety of cortical recordings, and determining their existence and stability properties is a key question in mathematical neuroscience. Cortical bumps have been studied intensively in neural fields (that is, coarse-grained models), but a dynamical-systems treatment for the case of spiking networks is more elusive. I will present a novel approach to analyse these coherent structures in spiking networks, leading to the following conclusions: The model under consideration does not support a stationary, localised, heterogeneous steady state; therefore the coherent structure referred to as “bump” is not a bump in the usual sense. In a wide region of parameter space, the model supports countably many coexisting travelling waves. These waves are linearly unstable, have a spatially localised profile, and a vanishingly small speed. I will show numerical evidence that the structures known as “bump” and “wandering bump” are a peculiar form of spatio-temporal chaos, and their existence is underpinned by the bifurcation structure of the travelling waves mentioned above.
Wednesday January 29, Speaker: Flavien Léger, Room WN-S664, 16:00-17:00
Title: The Schrödinger bridge problem
The Schrödinger bridge problem (SBP) nowadays plays a vital role in the physics, mathematics, engineering, information geometry and machine learning communities. It was first introduced by Schrödinger in 1931 and is closely related to, but different from, the famous Schrödinger equation. The SBP searches for the minimal kinetic energy density path for drift-diffusion processes with fixed initial and final distributions. In physics, the SBP is related to the Schrödinger equation via Nelson’s stochastic mechanics. For numerical purposes, the SBP can be seen as an entropic regularisation of optimal transport; its numerical solvers include the Sinkhorn algorithm and the Fisher information regularisation method. In information geometry and machine learning, the SBP has been studied as a statistical divergence function. In modelling, the SBP minimising path, via the Hopf–Cole transformation, shares similar structures with Nash equilibria in mean field games.
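As a concrete illustration of the entropic-optimal-transport viewpoint, the Sinkhorn algorithm mentioned above alternately rescales the rows and columns of a Gibbs kernel until both prescribed marginals are matched. A minimal sketch (the grid, histograms and regularisation strength eps below are illustrative choices):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=2000):
    """Sinkhorn iterations for entropy-regularised optimal transport
    between histograms mu and nu with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                 # rescale rows toward mu
        v = nu / (K.T @ u)               # rescale columns toward nu
    return u[:, None] * K * v[None, :]   # transport plan

# Two Gaussian-like histograms on a 1-D grid, squared-distance cost.
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.8) ** 2) / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C)
```

As eps shrinks, the plan P concentrates toward the unregularised optimal transport plan; the entropic term is what connects this discrete scheme to the drift-diffusion formulation of the SBP.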