Prof. Dr. Yuehaw Khoo (University of Chicago, USA)
Tensor Density Estimator by Convolution-Deconvolution abstract
Abstract:
We propose a linear algebraic framework for performing density estimation. It consists of three simple steps: convolving the empirical distribution with certain smoothing kernels to remove the exponentially large variance; compressing the convolved empirical distribution into a tensor train using efficient tensor decomposition algorithms; and finally, applying a deconvolution step to recover the estimated density from this tensor-train representation. Numerical results demonstrate the high accuracy and efficiency of the proposed methods.
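The three steps of the abstract (convolve, compress, deconvolve) can be illustrated with a minimal 2-D sketch. This is a hypothetical simplification, not the authors' method: in two dimensions the tensor-train compression reduces to an ordinary truncated SVD, the Gaussian kernel and Tikhonov-regularized Fourier deconvolution are illustrative choices, and the rank `r` and regularization `eps` are made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical 2-D sample from a Gaussian mixture, discretized on a grid.
n, m = 5000, 64
x = np.concatenate([rng.normal(-1, 0.3, (n // 2, 2)),
                    rng.normal(1, 0.3, (n // 2, 2))])
edges = np.linspace(-3, 3, m + 1)
hist, _, _ = np.histogram2d(x[:, 0], x[:, 1], bins=[edges, edges], density=True)

# Step 1: convolve with a separable Gaussian kernel (variance reduction).
h = edges[1] - edges[0]
grid = (edges[:-1] + edges[1:]) / 2
sigma = 0.25
k = np.exp(-grid**2 / (2 * sigma**2))
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))           # centered 1-D kernel in Fourier space
smooth = np.real(np.fft.ifft2(np.fft.fft2(hist) * np.outer(K, K)))

# Step 2: compress as a low-rank factorization (the 2-D analogue of a tensor train).
U, s, Vt = np.linalg.svd(smooth)
r = 4  # hypothetical TT-rank
low_rank = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# Step 3: deconvolve in Fourier space (Tikhonov-regularized division by the kernel).
eps = 1e-3
Khat = np.outer(K, K)
est = np.real(np.fft.ifft2(np.fft.fft2(low_rank) * np.conj(Khat)
                           / (np.abs(Khat)**2 + eps)))
est = np.clip(est, 0, None)
est /= est.sum() * h * h  # renormalize to a probability density on the grid
```

In higher dimensions the SVD in step 2 would be replaced by a genuine tensor-train decomposition, which is where the method avoids the curse of dimensionality.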
14:15 • ETH Zentrum, Rämistrasse 101, Zürich, Building HG, Room G 19.1
Prof. Dr. Sebastian Herrero (University of Santiago de Chile)
A Lyapunov exponent associated to modular functions abstract
Abstract:
We define and prove properties of a GL(2, Z)-invariant function, a Lyapunov exponent associated to the modular function j, generalizing a function defined by Spalding and Veselov in the case of the constant function 1. Our results were motivated by conjectures of Kaneko about the "values" of j at real quadratic irrationalities. This is joint work with Paloma Bengoechea and Özlem Imamoglu.
14:15 • ETH Zentrum, Rämistrasse 101, Zürich, Building HG, Room G 43
Dorian Martino (ETH Zürich)
Recent progress on the regularity of n-harmonic maps abstract
Abstract:
The full regularity of harmonic maps from a given surface into an arbitrary Riemannian manifold was proved by Hélein in 1991. This is no longer true when the domain has dimension strictly greater than 2: in 1995, Rivière constructed an example of a harmonic map from a 3-dimensional domain that is everywhere discontinuous. There are many possible generalizations of these maps to the higher-dimensional case in order to recover the regularity of some "optimal" maps. For most of these generalizations, full regularity in the general case is still open. In this talk, we will discuss some recent progress obtained for n-harmonic maps. This is joint work with Armin Schikorra.
14:15 • EPF Lausanne, MA B1 11
Philippe Naveau (LSCE CNRS)
Multivariate distributional modelling of low, moderate, and large intensities for hydrological applications abstract
Abstract:
In fields such as hydrology and climatology, modelling the entire distribution of positive data (e.g. precipitation) is essential, as stakeholders require insights into the full range of possible values, from low recordings (droughts) to large extremes (heavy rainfall). Traditional approaches often segment the distribution into separate regions, which introduces subjectivity (threshold selection) and brings inferential complexity. This is especially true when dealing with multivariate data. In this talk, I will present a few recent proposals to deal with such issues. Different inference schemes will be proposed and tested on simulated data. Concerning the application, we will focus on hydrological problems. In particular, the modelling of aggregated rainfall time series in France will be shown, as well as the analysis of multivariate flood levels for a UK river. This talk is based on joint works with Pierre Ailliot, Noura Alotaibi, Carlo Gaetan, Raphael Huser and Matthew Sainsbury-Dale.
15:15 • EPF Lausanne, CM 1 517
Dr. Yannik Schüler-Hammer (ETH Zürich)
Quintuple Hodge integrals abstract
Abstract:
I will discuss recent progress on conjectural formulas for quintuple Hodge integrals. A proof of these formulas will be presented in two distinct limits, and I will discuss the obstructions to extending the arguments beyond the two regimes. I will also mention their implications for Gromov–Witten theory of toric Calabi–Yau fivefolds and string theory, with particular attention to constant map contributions.
16:00 • ETH Zentrum, Rämistrasse 101, Zürich, Building HG, Room G 43
Prof. Dr. Rüdiger Urbanke (EPFL)
Abstract:
Joint work with Meir Feder and Yaniv Fogel, with the help of Ido Atlas, all at Tel Aviv University. I will introduce an information-theoretic framework that views learning as universal prediction under log loss, characterized through regret bounds. Central to the framework is an effective notion of architecture-based model complexity, defined by the probability mass or volume of models in the vicinity of the data-generating process. This volume is related to spectral properties of the Hessian or Fisher Information Matrix, leading to tractable approximations. I will argue that successful architectures possess a broad complexity range, enabling learning in highly over-parameterized model classes. The framework sheds light on the role of inductive biases, the effectiveness of stochastic gradient descent, and phenomena such as flat minima. It unifies online, batch, supervised, and generative settings, and applies across the stochastic realizable, agnostic, and even individual regimes. Moreover, it provides insights into the success of modern machine-learning architectures, including deep neural networks and transformers, suggesting that their broad complexity range naturally arises from their layered structure. These insights open the door to the design of alternative architectures with potentially comparable or even superior performance.
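The abstract's link between the Hessian spectrum and the volume of near-optimal models can be illustrated with a toy calculation. This is a hypothetical sketch, not the speaker's framework: `log_volume_proxy`, the quadratic-loss setting, and the threshold `eps` are all illustrative assumptions.

```python
import numpy as np

# Toy setting: a quadratic loss L(w) = 0.5 * w^T H w around a minimum.
# Models with loss below eps form the ellipsoid {w : 0.5 w^T H w <= eps},
# whose volume scales like prod_i sqrt(2 * eps / lambda_i), so (up to constants)
#   log-volume ~ (d/2) * log(2 * eps) - (1/2) * sum_i log(lambda_i).
def log_volume_proxy(H, eps=1e-2):
    lam = np.linalg.eigvalsh(H)  # curvature spectrum at the minimum
    return 0.5 * len(lam) * np.log(2 * eps) - 0.5 * np.sum(np.log(lam))

sharp = np.diag([100.0, 100.0, 100.0])  # sharp minimum: large curvature
flat = np.diag([0.1, 0.1, 0.1])         # flat minimum: small curvature

# Flat minima occupy exponentially more volume around them, i.e. lower
# effective complexity in this volume-based sense.
print(log_volume_proxy(flat) > log_volume_proxy(sharp))
```

In this picture, flat minima correspond to large volumes of nearby well-performing models, which is one way to connect the Hessian spectrum to the complexity notion mentioned in the abstract.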
16:15 • Universität Bern, Sidlerstrasse 5, 3012 Bern, Room B6