Program

2020-02-20

The workshop will take place in Grenoble, April 15th to 17th, starting at 10:30 AM on the 15th and ending at 12:30 PM on the 17th.

The topic for the workshop is Determinantal Point Processes, and more specifically computational methods, recent theoretical developments, and current challenges.

Rémi Bardenet (CNRS, CRIStAL, Lille) will give a tutorial on DPPs on the first day.

Full schedule to appear soon.

Abstracts

Determinantal point processes are everywhere: tractable repulsiveness for statistical models and computational tools

Rémi Bardenet

Determinantal point processes in spatial statistics: modelling, mixing and simulation

Christophe Biscio

Determinantal point processes and image processing: some applications

Claire Launay

In this talk, we will consider determinantal point processes (DPPs) from an image processing perspective, meaning that the data we want to subsample are pixels or patches of a given image. Our framework is therefore discrete and finite. First, we adapt the basic definition and properties of DPPs to processes defined on the pixels of an image, which we call determinantal pixel processes (DPixPs). We apply DPixPs to texture synthesis using shot noise models: we will present how to adapt a DPixP kernel to a given spot function and describe its asymptotic behaviour. We also investigate the estimation of DPixP kernels from one or several samples. Finally, we study DPPs on the set of patches of an image. We will detail how DPPs can be used in this setting and how to choose an appropriate kernel.
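As background for this discrete setting, here is a minimal sketch (standard material, not the speaker's code) of the spectral sampler of Hough, Krishnapur, Peres and Virág for a DPP with symmetric kernel matrix `K` on a finite ground set such as the pixels of an image; the function name and the flattened pixel indexing are our own illustrative choices.

```python
import numpy as np

def sample_dpp(K, rng=None):
    """Sample a subset of {0, ..., N-1} from the DPP with symmetric kernel
    matrix K, whose eigenvalues must lie in [0, 1] (spectral algorithm of
    Hough, Krishnapur, Peres and Virag)."""
    rng = np.random.default_rng(rng)
    eigvals, eigvecs = np.linalg.eigh(K)

    # Phase 1: keep each eigenvector independently, with probability
    # equal to its eigenvalue.
    V = eigvecs[:, rng.random(len(eigvals)) < eigvals]

    # Phase 2: pick one item at a time, then project the kept eigenvectors
    # onto the orthogonal complement of the selected coordinate.
    sample = []
    while V.shape[1] > 0:
        probs = np.sum(V**2, axis=1) / V.shape[1]     # squared row norms
        i = rng.choice(len(probs), p=probs)
        sample.append(i)
        j = np.argmax(np.abs(V[i, :]))                # column with V[i, j] != 0
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])  # zero out row i
        V = np.delete(V, j, axis=1)                   # drop the spent column
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                    # re-orthonormalise
    return sorted(sample)
```

For an image with `h * w` pixels, `K` is an `(h*w, h*w)` matrix indexed by flattened pixel positions, so the eigendecomposition is the computational bottleneck at image scale.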

DLPs in the ALPs

Adrien Kassel

I will describe a family of determinantal probability measures on the Grassmannian of a Euclidean space, introduced in recent joint work with Thierry Lévy. This family of random linear subspaces, which we call determinantal linear processes (DLPs), generalises the family of determinantal point processes with self-adjoint kernel. I will explain the motivation for introducing these measures, which grew out of the desire for a determinantal process that is equivariant under certain unitary symmetries, and I will provide an example generalising the uniform spanning tree of a graph. My question to the audience will be whether these measures could be used in statistical applications, either in a context where there is a natural internal symmetry to the data, or in the context of subspace learning or subspace-valued data analysis.
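As background (this is the standard discrete definition, not notation from the talk): a determinantal point process $X$ with self-adjoint kernel $K$ on a finite ground set is characterised by its inclusion probabilities being principal minors of $K$,

$$\mathbb{P}(S \subseteq X) = \det(K_S), \qquad K_S = (K_{ij})_{i,j \in S},$$

and DLPs replace the random subset $X$ by a random linear subspace while retaining a determinantal structure.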

Challenges in learning discrete DPPs

Victor-Emmanuel Brunel

Learning the kernel of a DPP poses several nontrivial questions, both at the statistical and at the computational level. The maximum likelihood estimator (MLE) is very hard to compute, because of the very complex likelihood landscape of the model. In previous work, we studied the asymptotic properties of the MLE when the kernel of the DPP is assumed symmetric and positive definite. A symmetric kernel enforces repulsive interactions between items (hence DPPs were originally named fermionic processes), whereas in some applications it can be interesting to also allow for attractive interactions. Moreover, the definiteness assumption on the kernel is very restrictive, since many machine learning applications call for low-rank kernels, in order to model DPPs that put mass on small subsets only. What is the behavior of the MLE when the true kernel has low rank? In this case, how should the log-likelihood be penalized in order to accelerate the learning rates, especially in very high dimensions? Under which restrictions can we prove non-asymptotic error bounds for the MLE (or penalized MLE)?

Another approach for learning discrete DPPs is based on a method of moments. It consists of learning the kernel through the estimation of its principal minors, and it is closely related to the principal minor assignment problem: how can a matrix be recovered efficiently when only the values of its principal minors are available? This problem is very well understood in the case of symmetric matrices, but very little is known in the general case.
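To fix ideas on the likelihood side, here is a minimal sketch under the common L-ensemble parametrisation, in which $P(X = S) = \det(L_S)/\det(L + I)$; this is an illustration of the objective, not the speaker's code, and the function name is ours.

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Log-likelihood of observed subsets under an L-ensemble DPP,
    where P(X = S) = det(L_S) / det(L + I)."""
    n = L.shape[0]
    # Normalising constant: log det(L + I), shared by all observations.
    _, logdet_norm = np.linalg.slogdet(L + np.eye(n))
    ll = -len(samples) * logdet_norm
    for S in samples:
        idx = np.asarray(S)
        # Principal minors are nonnegative for positive semidefinite L.
        _, logdet_S = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += logdet_S
    return ll
```

Maximising this objective over the (symmetric, or low-rank factored) matrix $L$ is a non-convex problem of the kind whose complex landscape the abstract refers to.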

Monte-Carlo integration of non-differentiable functions using determinantal point processes

Adrien Mazoyer

In recent work, R. Bardenet (CRIStAL, Lille) and A. Hardy (Laboratoire Paul Painlevé, Lille) have proposed to use a particular class of DPPs, orthogonal polynomial ensembles (OPEs), to produce exactly $n$ quadrature points for Monte-Carlo integration. The authors proved that their estimator satisfies a central limit theorem with explicit variance and rate of convergence $\sqrt{n^{1+1/d}}$ instead of the typical $\sqrt{n}$, under the main assumption that the integrand is continuously differentiable and compactly supported.
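For reference, the estimator in question weights each of the $n$ points $x_1, \dots, x_n$ drawn from the OPE by the inverse of the associated Christoffel-Darboux kernel $K_n$ on the diagonal (notation ours; $\Omega_f^2$ denotes the limiting variance):

$$\widehat{I}_n(f) = \sum_{i=1}^{n} \frac{f(x_i)}{K_n(x_i, x_i)}, \qquad \sqrt{n^{1+1/d}}\,\Bigl(\widehat{I}_n(f) - \int f \,\mathrm{d}\mu\Bigr) \xrightarrow{d} \mathcal{N}\bigl(0, \Omega_f^2\bigr).$$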

In this joint work with J.-F. Coeurjolly (UQAM, Montreal) and P.-O. Amblard (Gipsa-Lab, Grenoble), we propose a similar approach based on products of Dirichlet-type kernels instead of orthogonal polynomials. In this case, the integral estimator satisfies a central limit theorem with explicit variance and the same rate of convergence as that obtained with OPEs. Moreover, the assumptions on the integrand are relaxed: we only require that it belong to a Sobolev space with regularity $s > 1/2$, and we do not impose that it be compactly supported.
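For orientation, one standard (periodic) definition of this regularity class, which may differ in details from the exact space used in the paper: $f$ has Sobolev regularity $s$ when its Fourier coefficients satisfy

$$\sum_{k \in \mathbb{Z}^d} \bigl(1 + \|k\|^2\bigr)^{s}\, |\widehat{f}(k)|^2 < \infty,$$

so $s > 1/2$ admits integrands that are far from continuously differentiable, consistent with the title's emphasis on non-differentiable functions.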

Kernel quadrature and interpolation using determinantal sampling

Ayoub Belhadji