Publications
October 26, 2020
2020

Barthelmé, S., Tremblay, N., Usevich, K., & Amblard, P. O. (2020). Determinantal Point Processes in the Flat Limit: Extended L-ensembles, Partial-Projection DPPs and Universality Classes. arXiv:2007.04117

Barthelmé, S., & Usevich, K. (2020). Spectral properties of kernel matrices in the flat limit. arXiv:1910.14067. To appear in SIAM Journal on Matrix Analysis and Applications.

Maho, P., Herrier, C., Livache, T., Comon, P., & Barthelme, S. (2020). Real-time gas recognition and gas unmixing in a robot application. hal-02534216v2. To appear in Sensors & Actuators B.

Maho, P., Herrier, C., Livache, T., Rolland, G., Comon, P., & Barthelmé, S. (2020). Reliable chiral recognition with an electronic nose. Biosensors and Bioelectronics, 112183. hal-02534216

Pilavci, Y. Y., Amblard, P. O., Barthelmé, S., & Tremblay, N. (2020). Smoothing graph signals via random spanning forests. arXiv:1910.07963. ICASSP.
2019

Breuil, C., Jennings, B. J., Barthelmé, S., & Guyader, N. (2019). Color improves edge classification in human vision. PLoS Computational Biology, 15(10).

Barthelmé, S., Tremblay, N., Gaudillière, A., Avena, L., & Amblard, P. O. (2019). Estimating the Inverse Trace using Random Forests on Graphs. GRETSI. arXiv:1905.02086

Maho, P., Dolcinotti, C., Livache, T., Herrier, C., Andreev, A., Comon, P., & Barthelme, S. (2019). Reconnaissance de plusieurs composés chimiques à l'aide d'un robot équipé d'un nez électronique (recognition of several chemical compounds using a robot equipped with an electronic nose). GRETSI.

Trukenbrod, H. A., Barthelmé, S., Wichmann, F. A., & Engbert, R. (2019). Spatial statistics for gaze patterns in scene viewing: Effects of repeated viewing. Journal of Vision, 19(6), 55.

Barthelmé, S., Amblard, P. O., & Tremblay, N. (2019). Asymptotic Equivalence of Fixed-size and Varying-size Determinantal Point Processes. Bernoulli.
Determinantal Point Processes are useful point processes for subsampling. They come in two variants: fixed size and varying size, with the latter being more tractable but somewhat less practical. We show that the two variants have essentially the same marginals in large ground sets, and give numerically stable algorithms for computing inclusion probabilities.
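To make the objects concrete, here is a small numerical sketch (illustrative only: the L-ensemble kernel below is randomly generated, and the naive matrix inverse stands in for the numerically stable algorithms of the paper):

```python
import numpy as np

# Hypothetical example: a random positive semi-definite L-ensemble
# kernel on a ground set of n = 5 items.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
L = A @ A.T

# For a varying-size L-ensemble DPP, the marginal kernel is
# K = L (L + I)^{-1}, and the inclusion probability of item i is
# K[i, i]. (The paper's stable algorithms avoid this naive inverse.)
K = L @ np.linalg.inv(L + np.eye(n))
inclusion_probs = np.diag(K)

# Sanity checks: each probability lies in [0, 1], and their sum is
# the expected sample size, trace(K).
assert np.all(inclusion_probs >= 0) and np.all(inclusion_probs <= 1)
expected_size = inclusion_probs.sum()
```

Since K shares eigenvectors with L and has eigenvalues λ/(1+λ) in [0, 1), its diagonal entries are automatically valid probabilities.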
Maho, P., Dolcinotti, C., Livache, T., Herrier, C., Andreev, A., Comon, P., & Barthelme, S. (2019). Olfactive robot for gas discrimination over several months using a new optoelectronic nose. ISOEN.
2018
Pierre Maho, Simon Barthelme, Pierre Comon (2018). Nonlinear source separation under the Langmuir model for chemical sensors. IEEE Sensor Array and Multichannel Signal Processing Workshop. hal-01802358
How to separate different odorants in a certain class of electronic noses.
Tremblay, N., Barthelmé, S., & Amblard, P. O. (2018). Determinantal Point Processes for Coresets. arXiv:1803.08700. Now published in JMLR.
Coresets are small weighted subsets that provide a (provably) good summary of a large dataset. We show how to use DPPs to build coresets.
2017
Tremblay, N., Amblard, P. O., & Barthelmé, S. (2017). Graph sampling with determinantal processes. In Signal Processing Conference (EUSIPCO), 2017 25th European (pp. 1674-1678). IEEE. arXiv:1703.01594
Assume a signal that lives on the nodes of a graph, and suppose you cannot measure the signal at every node. How do you pick nodes so that you can reconstruct the full signal? We suggest using Determinantal Point Processes for that task. See also (in French):

Tremblay, N., Barthelme, S., & Amblard, P. O. (2017). Echantillonnage de signaux sur graphes via des processus déterminantaux (sampling graph signals via determinantal processes). GRETSI. arXiv:1704.02239.
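A minimal sketch of the reconstruction setting, assuming a k-bandlimited signal on a path graph. For brevity the sampled nodes below are taken at evenly spaced positions rather than drawn from an actual DPP, which is what the paper advocates:

```python
import numpy as np

# Path graph on n nodes; a k-bandlimited signal lives in the span of
# the k lowest-frequency Laplacian eigenvectors.
n, k = 20, 4
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Lap[0, 0] = Lap[-1, -1] = 1.0          # endpoint degrees of the path
_, U = np.linalg.eigh(Lap)             # eigenvalues in ascending order
Uk = U[:, :k]

rng = np.random.default_rng(1)
x = Uk @ rng.standard_normal(k)        # the (unobserved) full signal

# Node selection: the paper draws nodes from a DPP built from Uk; as
# a crude deterministic stand-in we take 2k evenly spaced nodes.
S = np.linspace(0, n - 1, 2 * k).astype(int)

# Least-squares reconstruction from the sampled values alone.
coef, *_ = np.linalg.lstsq(Uk[S], x[S], rcond=None)
x_hat = Uk @ coef
assert np.allclose(x_hat, x)           # exact recovery: Uk[S] has rank k
```

The point of DPP sampling is precisely to guarantee (with high probability and few samples) that the submatrix Uk[S] is well conditioned, which the naive choice above does not in general.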

Dehaene, G., & Barthelme, S. (2017). Expectation Propagation in the large-data limit. Journal of the Royal Statistical Society Series B. arXiv:1503.08060
EP is a popular method for variational inference which can be remarkably effective despite the fact that very little theory supports it. Our main contribution is to show that EP is asymptotically exact: as the number of datapoints goes to infinity, you are guaranteed to recover the exact Gaussian posterior. This turns out to be quite hard to prove, and we introduce some new theoretical tools for analysing EP formally, including a simpler algorithm (aEP) that is asymptotically equivalent to EP.
2016
Grabska-Barwińska, A., Barthelmé, S., Beck, J., Mainen, Z. F., Pouget, A., & Latham, P. E. (2016). A probabilistic approach to demixing odors. Nature Neuroscience, 20(1), 98-106.
The olfactory system faces the problem of having to detect specific odorants that are never present in isolation, but rather in a complex, ever-changing olfactory soup. How does the brain do it?
Gruenhage, G., Opper, M., & Barthelme, S. (2016). Visualizing the effects of a changing distance using continuous embeddings. Computational Statistics and Data Analysis. arXiv:1311.1911
What to do when your analysis depends on a certain distance function, but that distance function is not uniquely defined? Look at how distance patterns change. Software package at https://github.com/ginagruenhage/cmdsr
2015
Dehaene, G., & Barthelme, S. (2015). Bounding Errors of Expectation-Propagation. *Advances in Neural Information Processing Systems (NIPS)*. arXiv:1601.02387
We prove that EP is remarkably accurate (under strong assumptions), in the sense that the approximation given by EP converges very fast to the optimal approximation as the dataset grows.
Barthelmé, S., Chopin, N., & Cottet, V. Divide and conquer in ABC: Expectation-Propagation algorithms for likelihood-free inference. Handbook of Approximate Bayesian Computation (S. Sisson, Y. Fan, M. Beaumont, eds.). arXiv:1512.00205
A follow-up to our JASA paper on EP-ABC, to appear in the Handbook of Approximate Bayesian Computation, edited by S. Sisson, Y. Fan, and M. Beaumont. We explain how to parallelise the algorithm effectively, and we illustrate with an application to spatial extremes.
Barthelme, S., & Chopin, N. (2015). The Poisson Transform for Unnormalised Statistical Models. Statistics and Computing. arXiv:1406.2839
In inference for unnormalised statistical models, you have a likelihood function whose normalisation constant is too hard to compute (for example, an Ising model). It's an important class of models in machine learning, computer vision and statistics. We show that there is a principled way of treating the missing normalisation constant as a parameter to estimate, via a connection to point process estimation. Gutmann & Hyvärinen's noise-contrastive estimation can be viewed as a practical approximation of that technique.
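A toy sketch of the idea (my own illustrative setup, not code from the paper): fit the unnormalised Gaussian exp(-θx²/2) by maximising a Poisson-process-style log-likelihood in which the log-normalising constant ν is just another free parameter; jointly maximising over (θ, ν) recovers the ordinary MLE θ = 1/mean(x²).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)          # data from N(0, 1)
n = len(x)

# Unnormalised model f(x; theta) = exp(-theta * x^2 / 2), with known
# normaliser Z(theta) = sqrt(2 pi / theta) used only inside the
# objective. Poisson objective with intensity exp(nu) * f(x; theta):
#   l(theta, nu) = sum_i [nu - theta * x_i^2 / 2] - exp(nu) * Z(theta)
def neg_loglik(params):
    log_theta, nu = params
    theta = np.exp(log_theta)          # keep theta positive
    Z = np.sqrt(2 * np.pi / theta)
    return -(np.sum(nu - theta * x**2 / 2) - np.exp(nu) * Z)

res = minimize(neg_loglik, x0=[0.0, np.log(n)], method="L-BFGS-B")
theta_hat = np.exp(res.x[0])

# The joint maximiser matches the usual (normalised) MLE.
assert abs(theta_hat - 1.0 / np.mean(x**2)) < 1e-2
```

Maximising over ν alone gives exp(ν) = n/Z(θ), and substituting back yields exactly the normalised log-likelihood, which is why the trick works.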
2014
Engbert, R., Trukenbrod, H., Barthelme, S., & Wichmann, F. (2014). Spatial statistics and attentional dynamics in scene viewing. *Journal of Vision*. arXiv:1405.3270
Eye movements in visual scenes cluster at small scales (fixations have nearer neighbours than chance would predict). Why? Sequential dependencies.
Barthelme, S. (2014). Fast matrix computations for functional additive models. Statistics & Computing. arXiv:1402.4984
How to speed up inference for Gaussian process models over sets of related functions (e.g. the latent rate of spike trains over repeated trials).
Barthelmé, S., & Chopin, N. (2014). Expectation propagation for likelihood-free inference. Journal of the American Statistical Association, 109(505), 315-333. arXiv:1107.5959
Likelihood-free inference is what you end up doing when you have a model whose likelihood function is very hard or impossible to compute. We show that Thomas Minka's expectation-propagation algorithm can be wonderfully effective in a likelihood-free context, given a few modifications. Using pseudo-likelihood techniques and EP-ABC you could estimate essentially any kind of model.
2013
Simon Barthelmé, Hans Trukenbrod, Ralf Engbert, Felix Wichmann (2013). Modelling fixation locations using spatial point processes. In press at Journal of Vision. http://arxiv.org/abs/1207.2370
Statistical tools for the analysis of eye movement data. Also, an attempt at a userfriendly introduction to spatial point processes.
2011

Simon Barthelmé, Nicolas Chopin (2011). ABC-EP: Expectation Propagation for Likelihood-free Bayesian Computation. ICML 2011 (Proceedings of the 28th International Conference on Machine Learning), L. Getoor and T. Scheffer (eds), 289-296.

Simon Barthelmé, Nicolas Chopin (2011). Discussion of “Riemann manifold Langevin and Hamiltonian Monte Carlo methods” by Girolami and Calderhead. *Journal of the Royal Statistical Society, Series B*, 73(2), 173.
Minor comment on a Read Paper of the RSS.
2010
Simon Barthelmé, Pascal Mamassian (2010). Flexible mechanisms underlie the evaluation of visual confidence. Proceedings of the National Academy of Sciences, 107(48):20834-20839.
How complex are the mechanisms the visual system uses to evaluate its uncertainty? The simplest strategy is to follow an obvious cue to visual uncertainty, like contrast. We find that people do something more complicated than that.
2009
Simon Barthelmé, Pascal Mamassian (2009). Evaluation of Objective Uncertainty in the Visual System. PLoS Computational Biology.
How do we know when to trust our visual sense? That is, how do we know when we are getting reliable information out of our visual system? This paper looks at the issue from a Bayesian point of view. We set up a visual task with a well-defined objective uncertainty: for every stimulus we show subjects, we have a measure of how much information the stimulus actually contains. We show that observers' subjective uncertainty correlates with the objective uncertainty in the task. We describe and compare two simple computational models that explain how subjective and objective uncertainty could be linked.
PLoS Comp Bio has done a rather terrible job with the layout on that one (you'd think they could do a little better considering the $2,200 they charge for publication), so I've made an alternative PDF with better-looking equations, figures that are actually centred on the page, plus the Supplementary Information (in which a couple of minor mistakes and typos have been fixed).
Patrick J. Mineault, Simon Barthelmé, Christopher C. Pack (2009). Improved classification images with sparse priors in a smooth basis. Journal of Vision.
In classification image experiments (and related techniques like Bubbles), you show subjects random stimuli and kindly ask them to categorise them: for example you might show them random faces, to be classified as male or female. The hope is to characterise what parts of the stimuli subjects use in their judgement. This is usually done by assuming that the subject's behaviour is roughly linear in stimulus space, so that characterising the observer boils down to running a regression: identifying which dimensions of the stimulus influence subjects' responses. If you describe stimuli as a set of pixels, then there usually are far too many dimensions to estimate anything reliably. In this paper we suggest that using sparse priors in the right basis yields much better estimates of the observer's strategy than traditional techniques. This amounts to assuming that most dimensions are irrelevant to how subjects classify stimuli, so that we can focus on the dimensions that matter.
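A minimal numerical sketch of the idea (my own toy setup, not the paper's code): simulate a linear observer, form the raw classification image by reverse correlation, then impose sparsity in a smooth (DCT) basis by hard-thresholding the small coefficients, a crude stand-in for a proper sparse prior:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_trials = 32, 4000

# Orthonormal DCT-II basis; column q is the q-th smooth atom.
i = np.arange(d)
B = np.sqrt(2.0 / d) * np.cos(np.pi * np.outer(i + 0.5, np.arange(d)) / d)
B[:, 0] = np.sqrt(1.0 / d)

# True observer template: sparse in the DCT basis (2 active atoms).
c_true = np.zeros(d)
c_true[1], c_true[3] = 1.0, -0.7
w_true = B @ c_true

# Simulated experiment: random stimuli, noisy linear responses.
X = rng.standard_normal((n_trials, d))
y = X @ w_true + 0.5 * rng.standard_normal(n_trials)

# Raw classification image: reverse correlation in the pixel basis.
w_raw = X.T @ y / n_trials

# "Sparse prior in a smooth basis", crudely: transform to DCT
# coefficients and zero out the small ones.
c_hat = B.T @ w_raw
c_hat[np.abs(c_hat) < 0.1] = 0.0
w_sparse = B @ c_hat

err_raw = np.linalg.norm(w_raw - w_true)
err_sparse = np.linalg.norm(w_sparse - w_true)
assert err_sparse < err_raw   # the denoised image is closer to the template
```

The thresholded estimate wins because estimation noise is spread evenly over all basis coefficients, while the signal is concentrated in a few of them.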