Loss valleys, uncertainty, and generalization in deep learning

Courant Institute, New York University

In this talk, we discuss how to exploit the geometry of training objectives for scalable Bayesian model averaging, leading to better point predictions as well as improved uncertainty representation and calibration in deep learning. We will focus primarily on five works, including the surprising discovery of mode connectivity and its implications.
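
The talk builds on stochastic weight averaging (reference 2) and its Gaussian extension SWAG (reference 3), which turn the iterates visited by SGD into a point estimate and a simple posterior approximation. The sketch below is a minimal illustration of that idea, not the authors' implementation: it keeps running first and second moments of the weights and samples from the resulting diagonal Gaussian. The function names, hyperparameters, model, and data loader are hypothetical placeholders.

```python
import torch

@torch.no_grad()
def update_moments(mean, sq_mean, model, n):
    """Fold the current (flattened) weights into running first and second moments."""
    w = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    mean = (n * mean + w) / (n + 1)
    sq_mean = (n * sq_mean + w ** 2) / (n + 1)
    return mean, sq_mean

@torch.no_grad()
def sample_diag_gaussian(mean, sq_mean):
    """Draw one weight sample from N(mean, diag(var)), the SWAG-diagonal posterior."""
    var = torch.clamp(sq_mean - mean ** 2, min=1e-30)
    return mean + var.sqrt() * torch.randn_like(mean)

def train_with_weight_averaging(model, loader, loss_fn, epochs=100, swa_start=75, lr=0.05):
    """Run plain SGD, then start collecting weight moments once per epoch after a burn-in."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    device = next(model.parameters()).device
    numel = sum(p.numel() for p in model.parameters())
    mean = torch.zeros(numel, device=device)
    sq_mean = torch.zeros(numel, device=device)
    n_collected = 0

    for epoch in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        if epoch >= swa_start:
            mean, sq_mean = update_moments(mean, sq_mean, model, n_collected)
            n_collected += 1

    # `mean` is the averaged (SWA) solution; sampling weights from the diagonal
    # Gaussian and averaging their predictions gives a simple Bayesian model average.
    return mean, sq_mean
```

In practice, the averaged weights also need a refresh of any BatchNorm statistics before evaluation, since the running statistics were computed for the SGD iterates rather than for the averaged solution.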

References:

  1. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs (NeurIPS 2018)
  2. Averaging Weights Leads to Wider Optima and Better Generalization (UAI 2018)
  3. A Simple Baseline for Bayesian Uncertainty in Deep Learning (NeurIPS 2019)
  4. Subspace Inference for Bayesian Deep Learning (UAI 2019)
  5. SWALP: Stochastic Weight Averaging in Low-Precision Training (ICML 2019)