Nonparametric Bayesian models, methods, and applications / Edge-exchangeable graphs, clustering, and sparsity
MIT EECS, CSAIL, and IDSS

Edge-exchangeable graphs, clustering, and sparsity
Abstract: Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices. However, the Aldous-Hoover theorem guarantees that these graphs are dense or empty with probability one, whereas many real-world graphs are sparse. We present an alternative notion of exchangeability for random graphs, which we call edge exchangeability, in which the distribution of a graph sequence is invariant to the order of the edges. We demonstrate that a wide range of edge-exchangeable models, unlike any traditionally vertex-exchangeable models, can exhibit sparsity. To develop characterization theorems for edge-exchangeable graphs analogous to the powerful Aldous-Hoover theorem for vertex-exchangeable graphs, we turn to a seemingly different combinatorial problem: clustering. Clustering involves placing entities into mutually exclusive categories. A "feature allocation" relaxes the requirement of mutual exclusivity and allows entities to belong simultaneously to multiple categories. In the case of clustering, the class of probability distributions over exchangeable partitions of a dataset has been characterized (via "exchangeable partition probability functions" and the "Kingman paintbox"). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We show how these characterizations can be extended to feature allocations and, from there, to edge-exchangeable graphs.
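As a concrete illustration of how an edge-exchangeable graph can be sparse, here is a minimal Python sketch. It is not the construction from the talk: it draws each edge's two endpoints from a shared two-parameter (Pitman-Yor) urn over vertices, in the spirit of edge-exchangeable sequence models, and the function names and the parameters alpha and theta are illustrative choices rather than notation from the abstract.

```python
import random

def pitman_yor_vertex(degrees, alpha, theta):
    """Draw one edge endpoint from a two-parameter (Pitman-Yor) urn.

    degrees[v] counts how often vertex v has appeared as an endpoint.
    With n = sum(degrees) and K = len(degrees), a brand-new vertex appears
    with probability (theta + alpha*K) / (theta + n); existing vertex v is
    chosen with probability (degrees[v] - alpha) / (theta + n).
    """
    n, K = sum(degrees), len(degrees)
    if n == 0 or random.random() < (theta + alpha * K) / (theta + n):
        degrees.append(1)
        return K  # brand-new vertex
    r = random.uniform(0, n - alpha * K)
    acc = 0.0
    for v, d in enumerate(degrees):
        acc += d - alpha
        if r < acc:
            degrees[v] += 1
            return v
    degrees[-1] += 1  # guard against floating-point round-off
    return K - 1

def sample_edge_sequence(num_edges, alpha=0.7, theta=1.0, seed=0):
    """Each edge is a pair of endpoint draws from the shared urn; the
    resulting distribution is invariant to the order of the edges."""
    random.seed(seed)
    degrees = []
    edges = [(pitman_yor_vertex(degrees, alpha, theta),
              pitman_yor_vertex(degrees, alpha, theta))
             for _ in range(num_edges)]
    return edges, len(degrees)

for m in (100, 1000, 10000):
    _, num_vertices = sample_edge_sequence(m)
    # Sparsity check: the edge/vertex^2 ratio shrinks as the graph grows.
    print(m, num_vertices, m / num_vertices**2)
```

Since a Pitman-Yor urn with discount alpha in (0, 1) yields roughly n^alpha distinct vertices after n endpoint draws, choosing alpha > 1/2 makes the edge count grow subquadratically in the vertex count, so the sampled graphs are sparse.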
MIT EECS, CSAIL, and IDSS

Primer: Nonparametric Bayesian models, methods, and applications
Abstract: Nonparametric Bayesian methods make use of infinite-dimensional mathematical structures to allow practitioners to learn more from their data as the size of the data set grows. What does that mean, and how does it work in practice? In this tutorial, we'll cover why machine learning and statistics need more than just parametric Bayesian inference. We'll introduce such foundational nonparametric Bayesian models as the Dirichlet process and the Chinese restaurant process, and touch on the wide variety of models available in nonparametric Bayes. Along the way, we'll see what exactly nonparametric Bayesian methods are and what they accomplish.
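To make the Chinese restaurant process concrete, here is a minimal Python sketch (illustrative only, not tutorial material); the function name and the concentration parameter alpha are our own labels.

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Simulate seating for n customers in a CRP with concentration alpha.

    Customer i (0-indexed) starts a new table with probability
    alpha / (alpha + i) and otherwise joins an occupied table with
    probability proportional to its current occupancy.
    """
    random.seed(seed)
    tables = []       # occupancy count per table (cluster)
    assignments = []  # table index for each customer
    for i in range(n):
        if random.random() < alpha / (alpha + i):
            tables.append(1)  # open a new table
            assignments.append(len(tables) - 1)
        else:
            r = random.uniform(0, i)  # pick among the i seated customers
            acc = 0
            for t, c in enumerate(tables):
                acc += c
                if r < acc:
                    tables[t] += 1
                    assignments.append(t)
                    break
    return assignments, tables

for n in (10, 100, 1000, 10000):
    _, tables = chinese_restaurant_process(n, alpha=1.0)
    print(n, len(tables))  # cluster count keeps growing, roughly alpha*log(n)
```

The printed cluster counts keep growing with n, which illustrates the nonparametric flavor: the model does not fix the number of clusters in advance, and more structure can emerge as more data arrive.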