Approximate Bayesian inference for latent Gaussian models in Stan

Latent Gaussian models are a common class of Bayesian hierarchical models, characterized by normally distributed local parameters. The posterior distribution of such models often induces a geometry that frustrates sampling algorithms, such as Stan’s Hamiltonian Monte Carlo (HMC), resulting in slow or incomplete exploration of the parameter space. To alleviate these difficulties, we can marginalize out the normal local variables and run HMC on the remaining, well-behaved subset of the parameters. Unfortunately, exact marginalization is possible in only a few simple cases. It is, however, possible to perform an approximate marginalization using an embedded Laplace approximation. We introduce a prototype suite of Stan functions that supports this approximation scheme, and demonstrate the method on a Gaussian process disease map and a sparse kernel interaction model.
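To make the marginalization idea concrete, here is a minimal NumPy sketch of an embedded Laplace approximation for one illustrative case: a latent Gaussian model with Poisson observations and a log link. This is an independent illustration of the general technique (following the standard, numerically stable Newton recursion of Rasmussen and Williams, Algorithm 3.1), not the interface or internals of the Stan prototype. An outer sampler such as HMC would then explore only the hyperparameters phi that determine the covariance K, using this approximate log marginal as the target.

```python
import math
import numpy as np

def laplace_marginal_poisson(y, K, max_iter=100, tol=1e-10):
    """Laplace approximation to log p(y | phi) for the model
        theta ~ Normal(0, K(phi)),   y_i ~ Poisson(exp(theta_i)).
    Newton iterations locate the mode of p(theta | y, phi); a Gaussian
    approximation at the mode yields the approximate log marginal.
    Returns (approximate log marginal likelihood, posterior mode)."""
    n = len(y)
    theta = np.zeros(n)
    for _ in range(max_iter):
        W = np.exp(theta)             # negative Hessian of log p(y | theta)
        grad = y - np.exp(theta)      # gradient of log p(y | theta)
        sW = np.sqrt(W)
        # B = I + W^{1/2} K W^{1/2}: well-conditioned matrix to factor
        B = np.eye(n) + sW[:, None] * K * sW[None, :]
        L = np.linalg.cholesky(B)
        b = W * theta + grad
        a = b - sW * np.linalg.solve(L.T, np.linalg.solve(L, sW * (K @ b)))
        theta_new = K @ a             # Newton update, so a = K^{-1} theta_new
        converged = np.max(np.abs(theta_new - theta)) < tol
        theta = theta_new
        if converged:
            break
    log_lik = np.sum(y * theta - np.exp(theta)
                     - np.array([math.lgamma(yi + 1) for yi in y]))
    # log p(y|phi) ~ -0.5 theta' K^{-1} theta + log p(y|theta) - 0.5 log|B|
    log_marginal = -0.5 * (a @ theta) + log_lik - np.sum(np.log(np.diag(L)))
    return log_marginal, theta
```

In the one-dimensional case the approximation can be checked directly against numerical integration of the marginal likelihood; in higher dimensions, the same routine lets HMC run on phi alone, with the latent theta recovered afterwards from its conditional Gaussian approximation.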

Presenter biography:
Charles Margossian

Charles Margossian is a PhD candidate in Statistics at Columbia University, advised by Andrew Gelman. His research interests lie in ODE-based models and pharmacometrics, approximate inference, discrete parameters, and automatic differentiation. For the past four years, he has been an active contributor to Stan.