Wesley Tansey
Columbia Data Science, Systems Biology
Interpreting and learning from black box models
Abstract: In this talk, we'll consider the problem of feature selection using black box predictive models. For example, high-throughput devices in science are routinely used to gather thousands of features for each sample in an experiment. The scientist must then sift through the many candidate features to find explanatory signals in the data, such as which genes are associated with sensitivity to a prospective therapy. Often, predictive models are used for this task: the model is fit, error on held-out data is measured, and strong-performing models are assumed to have discovered some fundamental properties of the system. A model-specific heuristic is then used to inspect the model parameters and rank important features, with top features reported as "discoveries." However, such heuristics provide no statistical guarantees and can produce unreliable results. Here, I'll present the holdout randomization test (HRT) as a principled approach to feature selection using black box predictive models. The HRT is model agnostic and produces a valid p-value for each feature, enabling control of the false discovery rate (or Type I error) for any predictive model. Time permitting, I'll also discuss how the techniques from the HRT can be adapted to the related, but subtly different, task of interpreting black box model predictions.
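To make the idea concrete, below is a minimal, hypothetical sketch of an HRT-style procedure, not the speaker's implementation. For each feature, it repeatedly resamples that feature from an estimated conditional distribution given the others, re-scores the already-fitted model on the holdout set, and reports the fraction of null losses that beat the original loss as an empirical p-value; Benjamini-Hochberg then turns the p-values into discoveries at a target false discovery rate. The linear-Gaussian conditional sampler, the function names, and all parameter choices here are illustrative assumptions; the p-values are only as valid as that conditional model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split


def hrt_pvalues(model, X_holdout, y_holdout, loss_fn, n_draws=500, seed=0):
    """Empirical HRT-style p-value for each feature of a fitted black box model."""
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y_holdout, model.predict(X_holdout))
    n, d = X_holdout.shape
    pvals = np.empty(d)
    for j in range(d):
        # Assumed conditional model for x_j given x_{-j}: a linear-Gaussian
        # fit, which is exact only if the features are jointly Gaussian.
        others = np.delete(X_holdout, j, axis=1)
        cond = LinearRegression().fit(others, X_holdout[:, j])
        mu = cond.predict(others)
        sd = np.std(X_holdout[:, j] - mu)
        null_losses = np.empty(n_draws)
        X_null = X_holdout.copy()
        for k in range(n_draws):
            # Replace feature j with a draw from its estimated conditional
            # distribution and re-score the unchanged model on the holdout set.
            X_null[:, j] = mu + rng.normal(0.0, sd, size=n)
            null_losses[k] = loss_fn(y_holdout, model.predict(X_null))
        # One-sided empirical p-value with add-one smoothing: if feature j
        # matters, the original loss should beat most of the null losses.
        pvals[j] = (1.0 + np.sum(null_losses <= base_loss)) / (1.0 + n_draws)
    return pvals


def benjamini_hochberg(pvals, alpha=0.1):
    """Return a boolean mask of discoveries at FDR level alpha."""
    d = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, d + 1) / d
    passed = np.nonzero(np.sort(pvals) <= thresholds)[0]
    k = passed.max() + 1 if passed.size else 0
    reject = np.zeros(d, dtype=bool)
    reject[order[:k]] = True
    return reject


# Toy usage: only the first 3 of 20 features affect the response.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 20))
y = X[:, 0] + 2.0 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=600)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_train, y_train)
pvals = hrt_pvalues(model, X_hold, y_hold, mean_squared_error, n_draws=200)
print(np.flatnonzero(benjamini_hochberg(pvals, alpha=0.1)))
```

Note the key design point the abstract emphasizes: the predictive model is fit once and never retrained; all the statistical work happens by perturbing one feature at a time on the holdout data, which is what makes the test model agnostic.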