#### Publications

##### November 25, 2018

**2018**

- Pierre Maho, Simon Barthelme, Pierre Comon (2018). Non-linear source
separation under the Langmuir model for chemical sensors.
*IEEE Sensor Array and Multichannel Signal Processing Workshop*. [hal:01802358]

How to separate different odorants in a certain class of electronic noses.

- Tremblay, N., Barthelmé, S., & Amblard, P. O. (2018). Determinantal Point Processes for Coresets. arXiv:1803.08700.

Coresets are small weighted subsets that provide a (provably) good summary of a large dataset. We show how to use DPPs to build coresets.
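For readers curious what sampling from a DPP actually involves, here is a minimal sketch (not the paper's code) of the standard spectral sampler for an L-ensemble, in Python with NumPy:

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Sample a subset from an L-ensemble DPP (spectral/HKPV algorithm)."""
    rng = np.random.default_rng() if rng is None else rng
    lam, V = np.linalg.eigh(L)
    # Phase 1: keep each eigenvector with probability lambda/(1+lambda)
    keep = rng.random(len(lam)) < lam / (1 + lam)
    V = V[:, keep]
    sample = []
    # Phase 2: pick items one at a time, shrinking the subspace each step
    while V.shape[1] > 0:
        p = np.sum(V**2, axis=1)
        p /= p.sum()
        i = rng.choice(len(p), p=p)
        sample.append(i)
        # Project the remaining subspace orthogonally to e_i
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)  # re-orthonormalise the columns
    return sorted(sample)
```

The eigendecomposition costs O(n³), which is exactly why coreset-style summaries of large datasets need the cheaper approximations discussed in the paper.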

- Barthelmé, S., Amblard, P. O., & Tremblay, N. (2018). Asymptotic Equivalence of Fixed-size and Varying-size Determinantal Point Processes. arXiv:1803.01576

Determinantal Point Processes are useful point processes for subsampling. They come in two variants: fixed size and varying size, with the latter being more tractable but somewhat less practical. We show that the two variants have essentially the same marginals in large ground sets, and give numerically stable algorithms for computing inclusion probabilities.
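As a small illustration of the first-order inclusion probabilities mentioned above, here is a sketch for an L-ensemble (this is the textbook identity computed via an eigendecomposition, not the paper's stabilised algorithm):

```python
import numpy as np

def inclusion_probs(L):
    """First-order inclusion probabilities of an L-ensemble DPP.

    Uses the identity K = L(L + I)^{-1}, evaluated through an
    eigendecomposition: pi_i = sum_k [lam_k / (1 + lam_k)] * V[i, k]^2.
    """
    lam, V = np.linalg.eigh(L)
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return (V**2) @ (lam / (1.0 + lam))
```

The probabilities sum to the expected sample size, which is one concrete way the varying-size process connects to its fixed-size counterpart.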

**2017**

- Tremblay, N., Amblard, P. O., & Barthelmé, S. (2017). Graph sampling with determinantal processes. In *Signal Processing Conference (EUSIPCO), 2017 25th European* (pp. 1674-1678). IEEE. arXiv:1703.01594

Assume a signal that lives on the nodes of a graph, and suppose you cannot measure the signal at every node. How do you pick nodes so that you can reconstruct the full signal? We suggest using Determinantal Point Processes for that task. See also (in French):

Tremblay, N., Barthelme, S., & Amblard, P. O. (2017). Echantillonnage de signaux sur graphes via des processus déterminantaux. GRETSI. arXiv:1704.02239.
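As a toy illustration of the sampling-and-reconstruction setting (not the paper's DPP-based sampling scheme; the measured nodes below are hand-picked), a k-bandlimited signal on a path graph can be recovered exactly from k well-chosen nodes by least squares in the Laplacian eigenbasis:

```python
import numpy as np

# Path graph on n nodes and its combinatorial Laplacian
n, k = 20, 3
A = np.diag(np.ones(n - 1), 1); A = A + A.T
Lap = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(Lap)
Uk = U[:, :k]                      # k lowest-frequency eigenvectors

rng = np.random.default_rng(0)
x = Uk @ rng.normal(size=k)        # a k-bandlimited signal on the graph

nodes = [2, 9, 16]                 # measured nodes (hand-picked here)
y = x[nodes]                       # partial observations
coef, *_ = np.linalg.lstsq(Uk[nodes, :], y, rcond=None)
x_hat = Uk @ coef                  # exact recovery if Uk[nodes] has rank k
```

The whole game is choosing `nodes` so that the reduced matrix `Uk[nodes, :]` stays well conditioned, which is where determinantal sampling comes in.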

- Dehaene, G., Barthelme, S. (2017). Expectation Propagation in the large-data limit. *Journal of the Royal Statistical Society Series B*. arXiv:1503.08060

EP is a popular method for variational inference which can be remarkably effective despite the fact that there's very little theory supporting it. Our main contribution is to show that EP is asymptotically exact, meaning that when the number of datapoints goes to infinity you're guaranteed to recover the exact Gaussian posterior. This turns out to be quite hard to prove, and we introduce some new theoretical tools that help analyse EP formally, including a simpler algorithm that's asymptotically equivalent to EP (aEP).

**2016**

- Grabska-Barwińska, A., Barthelmé, S., Beck, J., Mainen, Z. F.,
Pouget, A., & Latham, P. E. (2016). A probabilistic approach to
demixing odors.
*Nature Neuroscience*, *20*(1), 98-106.

The olfactory system faces the problem of having to detect specific odorants that are never present in isolation, but rather in a complex, ever-changing olfactory soup. How does the brain do it?

- Gruenhage, G., Opper, M., Barthelme, S. (2016). Visualizing the effects
of a changing distance using continuous embeddings.
*Computational Statistics and Data Analysis*. arXiv:1311.1911

What to do when your analysis depends on a certain distance function, but that distance function is not uniquely defined? Look at how distance patterns change. Software package at https://github.com/ginagruenhage/cmdsr

**2015**

- Dehaene, G., Barthelme, S. (2015). Bounding Errors of Expectation-Propagation. *Advances in Neural Information Processing Systems (NIPS)*. arXiv:1601.02387

We prove that EP is remarkably accurate (under strong assumptions), in the sense that the approximation given by EP converges very fast to the optimal approximation as the dataset grows.

- Barthelmé, S., Chopin, N., and Cottet, V. Divide and conquer in ABC:
Expectation-Propagation algorithms for likelihood-free inference.
*Handbook of Approximate Bayesian Computation* (S. Sisson, L. Fan, M. Beaumont, eds.). arXiv:1512.00205

A follow-up to our JASA paper on EP-ABC, to appear in the Handbook of Approximate Bayesian Computation, edited by S. Sisson, L. Fan, and M. Beaumont. We explain how to parallelise the algorithm effectively, and we illustrate with an application to spatial extremes.

- Barthelme, S., Chopin, N. (2015). The Poisson Transform for
Unnormalised Statistical Models.
*Statistics and Computing*. arXiv:1406.2839

In inference for unnormalised statistical models, you have a likelihood function whose normalisation constant is too hard to compute (for example, an Ising model). It's an important class of models in machine learning, computer vision and statistics. We show that there is a principled way of treating the missing normalisation constant as a parameter to estimate, via a connection to point process estimation. Gutmann & Hyvärinen's noise-contrastive estimation can be viewed as a practical approximation of that technique.
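To make the "missing constant as a parameter" idea concrete, here is a minimal sketch of noise-contrastive estimation (Gutmann & Hyvärinen) on a toy Gaussian whose normalising constant we pretend not to know; this is an illustration of the related technique, not the Poisson-transform construction itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sigma = 2.0
data = rng.normal(0.0, sigma, size=20000)        # sample from the model
noise = rng.uniform(-10.0, 10.0, size=20000)     # noise with known density
log_pn = -np.log(20.0)                           # uniform log-density on [-10, 10]

def log_phi(x, theta):
    """Log unnormalised model plus a free constant c standing in for -log Z."""
    prec, c = theta
    return -0.5 * prec * x**2 + c

def loss(theta):
    # Logistic regression distinguishing data from noise;
    # the logit is the log-density ratio.
    g_data = log_phi(data, theta) - log_pn
    g_noise = log_phi(noise, theta) - log_pn
    return (np.mean(np.logaddexp(0.0, -g_data))    # -log sigmoid(g)
            + np.mean(np.logaddexp(0.0, g_noise))) # -log sigmoid(-g)

theta_hat = minimize(loss, x0=np.array([1.0, 0.0])).x
# theta_hat[0] should approach 1/sigma^2 = 0.25, and theta_hat[1]
# should approach -log(sigma * sqrt(2*pi)), i.e. the constant is *estimated*.
```

The loss is convex in `theta` because the log unnormalised density is linear in the parameters, so the optimiser recovers both the natural parameter and the constant.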

**2014**

- Engbert, R., Trukenbrod, H., Barthelme, S., Wichmann, F. (2014). Spatial
statistics and attentional dynamics in scene viewing.
*Journal of Vision*. arXiv:1405.3270

Eye movements in visual scenes cluster at small scales (fixations have nearer neighbours than chance would predict). Why? Sequential dependencies.

- Barthelme, S. (2014). Fast matrix computations for functional
additive models.
*Statistics & Computing*. arXiv:1402.4984

How to speed up inference for Gaussian process models over sets of related functions (e.g. the latent rate of spike trains over repeated trials).

- Barthelmé, S., & Chopin, N. (2014). Expectation propagation for likelihood-free inference. Journal of the American Statistical Association, 109(505), 315-333. arXiv:1107.5959

Likelihood-free inference is what you end up doing when you have a model whose likelihood function is very hard or impossible to compute. We show that Thomas Minka’s expectation-propagation algorithm can be wonderfully effective in a likelihood-free context, given a few modifications. Using pseudo-likelihood techniques and EP-ABC you could estimate essentially any kind of model.

**2013**

- Simon Barthelmé, Hans Trukenbrod, Ralf Engbert, Felix Wichmann.
Modelling fixation locations using spatial point processes. In press at
*Journal of Vision*. arXiv:1207.2370

Statistical tools for the analysis of eye movement data. Also, an attempt at a user-friendly introduction to spatial point processes.

**2011**

- Simon Barthelmé, Nicolas Chopin (2011). ABC-EP: Expectation Propagation for Likelihood-free Bayesian Computation. *ICML 2011* (Proceedings of the 28th International Conference on Machine Learning), L. Getoor and T. Scheffer (eds), 289-296.

- Simon Barthelmé, Nicolas Chopin (2011). Discussion of "Riemann manifold Langevin and Hamiltonian Monte Carlo methods" by Girolami and Calderhead. *Journal of the Royal Statistical Society, Series B*, 73(2), 173.

Minor comment on a Read Paper of the RSS.

**2010**

- Simon Barthelmé, Pascal Mamassian. (2010). Flexible mechanisms
underlie the evaluation of visual confidence.
*Proceedings of the National Academy of Sciences*, 107(48):20834-20839.

How complex are the mechanisms the visual system uses to evaluate its uncertainty? The simplest strategy is to follow an obvious cue to visual uncertainty, like contrast. We find that people do something more complicated than that.

**2009**

- Simon Barthelmé, Pascal Mamassian (2009). Evaluation of Objective
Uncertainty in the Visual System.
*PLoS Computational Biology*.

How do we know when to trust our visual sense? That is, how do we know
when we are getting reliable information out of our visual system? This
paper looks at the issue from a Bayesian point of view. We set up a
visual task with a well-defined *objective uncertainty*: for every
stimulus we show subjects, we have a measure on how much information the
stimulus actually contains. We show that observers’ subjective
uncertainty correlates with the objective uncertainty in the task. We
describe and compare two simple computational models that explain how
subjective and objective uncertainty could be linked.

PLoS Comp Bio has done a rather terrible job with the layout on that one (you’d think they could do a little better considering the \$2,200 they charge for publication), so I’ve made an alternative PDF with better looking equations, figures that are actually centred on the page plus the Supplementary Information (in which a couple of minor mistakes and typos have been fixed).

- Patrick J. Mineault, Simon Barthelmé, Christopher C. Pack (2009).
Improved classification images with sparse priors in a smooth basis.
*Journal of Vision*.

In classification image experiments (and related techniques like Bubbles), you show subjects random stimuli and kindly ask them to categorise these stimuli: for example, you might show them random faces, to be classified as male or female. The hope is to characterise which parts of the stimuli the subject uses in their judgement. The way this is usually done is to assume that the subject's behaviour is roughly linear in stimulus space, so that characterising the observer boils down to running a regression to identify which dimensions of the stimulus influence their responses. If you describe stimuli as a set of pixels, then there are usually far too many dimensions to estimate anything reliably. In this paper we suggest that using sparse priors in the right basis yields much better estimates of the observer's strategy than traditional techniques. This amounts to assuming that most dimensions are irrelevant to how subjects classify stimuli, so that we can focus on the dimensions that matter.
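A toy version of the idea (not the paper's model or its particular smooth basis): simulate a linear observer whose template is sparse in a cosine basis, then recover the template by L1-penalised regression on the basis coefficients:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n = 64, 500                           # stimulus dimension, number of trials

# Smooth orthogonal basis: discrete cosine functions (a stand-in choice)
B = np.array([np.cos(np.pi * k * (np.arange(d) + 0.5) / d)
              for k in range(d)]).T

w_basis = np.zeros(d)
w_basis[[1, 3]] = [1.0, -0.7]            # template is sparse in the basis
template = B @ w_basis                   # the observer's true template

X = rng.normal(size=(n, d))              # random pixel stimuli
# Noisy binary classifications from a linear observer
resp = (X @ template + rng.normal(size=n) > 0) * 2 - 1

# Regress responses on basis coefficients under an L1 (sparsity) penalty
fit = Lasso(alpha=0.3, max_iter=5000).fit(X @ B, resp)
est = B @ fit.coef_                      # estimated template, back in pixels
```

With only 500 trials and 64 dimensions the naive pixel-wise regression is very noisy, while the sparse fit concentrates on the two active basis functions and correlates strongly with the true template.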