Deep probabilistic programming with Pyro / Stochastic gradient-based variational inference
Fritz Obermeyer
Pyro team
Deep probabilistic programming with Pyro
A longstanding goal of Bayesian machine learning research is to separate model description from inference implementation while keeping pace with the tremendous growth in the size and complexity of models and datasets. Advances over the last decade in three areas (automatic differentiation, Monte Carlo integration, and stochastic variational inference) have enabled unprecedented progress toward that goal in the form of deep probabilistic programming languages like Pyro. Pyro allows Bayesian models to be specified generatively as Python functions that invoke random samplers, and it provides both scalable black-box inference algorithms applicable to a wide variety of models and modular components for implementing custom inference algorithms. This talk will explain what deep probabilistic programming with Pyro is, and when and how you should use it. Our running example will be a semi-supervised labeling problem from single-cell transcriptomics: scANVI.
Eli Bingham
Pyro team
Primer: Stochastic gradient-based variational inference
Black-box variational inference (BBVI) algorithms, which recast the intractable integrals that appear in Bayesian inference as optimization problems solvable with off-the-shelf scientific computing and automatic differentiation software, have greatly expanded the range and scale of problems addressable in practice by Bayesian methods. This primer gives an introduction to Bayesian inference with BBVI, grounded in concrete example models. We will start with an introduction to Bayesian modeling, then discuss variational inference, including the variational objective function, the choice of approximate posterior distribution, data subsampling, and gradient estimation with automatic differentiation. Finally, within this framework we will construct variational autoencoders (VAEs), which incorporate neural networks into both models and inference.
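The variational objective the primer refers to is the evidence lower bound (ELBO). A standard one-line derivation via Jensen's inequality (notation assumed here, not taken from the abstract): for a model $p_\theta(x, z)$ with latent variables $z$ and an approximate posterior $q_\phi(z)$,

```latex
\log p_\theta(x)
  = \log \mathbb{E}_{q_\phi(z)}\!\left[\frac{p_\theta(x, z)}{q_\phi(z)}\right]
  \;\ge\; \mathbb{E}_{q_\phi(z)}\!\left[\log p_\theta(x, z) - \log q_\phi(z)\right]
  \;=:\; \mathrm{ELBO}(\theta, \phi)
```

The gap in the inequality is exactly $\mathrm{KL}\big(q_\phi(z)\,\|\,p_\theta(z \mid x)\big)$, so maximizing the ELBO in $\phi$ tightens the posterior approximation, while maximizing in $\theta$ fits the model; because the bound is an expectation, it admits the Monte Carlo gradient estimators and data subsampling discussed in the primer.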