Composing graphical models with neural networks for structured representations and fast inference
Google Brain
Abstract: I'll describe a new modeling and inference framework that combines the flexibility of deep learning with the structured representations of probabilistic graphical models. The model family augments latent graphical model structure, like switching linear dynamical systems, with neural network observation likelihoods. To enable fast inference, we show how to leverage graph-structured approximating distributions and, building on variational autoencoders, fit recognition networks that learn to approximate difficult graph potentials with conjugate ones. I'll show how these methods can be applied to learn to parse mouse behavior from depth video.
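To make the central trick concrete, here is a minimal sketch (my own illustration, not the talk's implementation) of one step of the idea: a recognition network maps an observation to the natural parameters of a conjugate Gaussian potential, which then combines with a Gaussian prior by simple addition of natural parameters, exactly the operation that graph-structured message passing repeats across the model. All names and shapes below are assumptions for the example.

import numpy as np

def recognition_network(y, W1, b1, W2, b2):
    """Toy MLP mapping an observation y to the natural parameters
    (h, J) of a Gaussian potential on a latent x, i.e.
    exp(h @ x - x @ J @ x / 2). In the framework described above,
    such conjugate potentials stand in for a non-conjugate neural
    network likelihood. Weights here are illustrative, not learned."""
    hidden = np.tanh(W1 @ y + b1)
    out = W2 @ hidden + b2
    d = out.shape[0] // 2
    h = out[:d]                    # linear natural parameter
    J = np.diag(np.exp(out[d:]))   # diagonal precision, kept positive
    return h, J

def combine_with_prior(h, J, prior_mean, prior_cov):
    """Conjugate update: multiplying a Gaussian prior by the
    recognition potential is just addition in natural parameters."""
    prior_J = np.linalg.inv(prior_cov)
    prior_h = prior_J @ prior_mean
    post_cov = np.linalg.inv(prior_J + J)
    post_mean = post_cov @ (prior_h + h)
    return post_mean, post_cov

# Illustrative usage with random weights.
rng = np.random.default_rng(0)
d_obs, d_lat, d_hid = 10, 2, 16
W1 = rng.normal(size=(d_hid, d_obs)); b1 = np.zeros(d_hid)
W2 = rng.normal(size=(2 * d_lat, d_hid)); b2 = np.zeros(2 * d_lat)
y = rng.normal(size=d_obs)
h, J = recognition_network(y, W1, b1, W2, b2)
mean, cov = combine_with_prior(h, J, np.zeros(d_lat), np.eye(d_lat))
print("approximate posterior mean:", mean)

In the full method, this single-node update would be one factor in a structured approximating distribution (for example, a linear dynamical system prior smoothed by Kalman message passing) rather than an isolated Gaussian.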
Primer: Bayesian time series modeling with recurrent switching linear dynamical systems
Columbia, Blei Lab
Abstract: Many natural systems like neurons firing in the brain or basketball teams traversing a court give rise to time series data with complex, nonlinear dynamics. We gain insight into these systems by decomposing the data into segments that are each explained by simpler dynamical units. Bayesian time series models provide a flexible framework for accomplishing this task. This primer will start with the basics, introducing linear dynamical systems and their switching variants. With this background in place, I will introduce a new model class called recurrent switching linear dynamical systems (rSLDS), which discover distinct dynamical units as well as the input- and state-dependent manner in which units transition from one to another. In practice, this leads to models that generate much more realistic data than standard SLDS. Our key innovation is to design these recurrent SLDS models to enable recent Pólya-gamma auxiliary variable techniques and thus make approximate Bayesian learning and inference in these models easy, fast, and scalable.
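As a companion to the primer, the following is a minimal sketch (my own simplification, not the speaker's code) of sampling from a recurrent SLDS: each discrete state indexes a linear dynamical regime, and the discrete transition probabilities depend on the current continuous state through a softmax, which is the "recurrent" ingredient. For simplicity the transitions here depend only on the continuous state, whereas the full model also conditions on the previous discrete state and on inputs; all parameter values are illustrative.

import numpy as np

def sample_rslds(T, As, bs, Rs, rs, Q, rng):
    """Sample a length-T trajectory from a toy recurrent switching
    linear dynamical system (rSLDS).

    As, bs : per-regime dynamics matrices and offsets (K of each)
    Rs, rs : weights mapping the continuous state x to discrete
             transition logits -- the state-dependent part
    Q      : dynamics noise covariance
    """
    K, d = len(As), As[0].shape[0]
    x = np.zeros(d)
    zs, xs = [], []
    for _ in range(T):
        # Discrete state probabilities depend on the current
        # continuous state via a softmax over logits.
        logits = Rs @ x + rs
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        z = rng.choice(K, p=probs)
        # Continuous state follows the linear dynamics of regime z.
        x = As[z] @ x + bs[z] + rng.multivariate_normal(np.zeros(d), Q)
        zs.append(z); xs.append(x)
    return np.array(zs), np.array(xs)

# Two regimes rotating in opposite directions; which regime is active
# depends on the sign of the first coordinate of x.
rng = np.random.default_rng(1)
rot = lambda th: np.array([[np.cos(th), -np.sin(th)],
                           [np.sin(th),  np.cos(th)]])
As = [0.99 * rot(0.1), 0.99 * rot(-0.1)]
bs = [np.array([0.05, 0.0]), np.array([-0.05, 0.0])]
Rs = np.array([[5.0, 0.0], [-5.0, 0.0]])  # logits depend on x[0]
rs = np.zeros(2)
Q = 0.01 * np.eye(2)
zs, xs = sample_rslds(200, As, bs, Rs, rs, Q, rng)
print("regime occupancy:", np.bincount(zs, minlength=2))

Inference for models of this form is harder than sampling because the softmax transitions are non-conjugate; the Pólya-gamma augmentation mentioned in the abstract is what restores conditional conjugacy and makes Bayesian learning tractable.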