Maximizing submodular functions exponentially faster

Dept. of Computer Science, Harvard University

In this talk I’ll describe a novel approach called adaptive sampling that yields algorithms whose parallel running time is exponentially faster for a broad range of machine learning applications. The algorithms are designed for submodular function maximization, which is the algorithmic engine behind applications such as clustering, network analysis, feature selection, Bayesian inference, ranking, speech and document summarization, recommendation systems, hyperparameter tuning, and many others. Since applications of submodular functions are ubiquitous across machine learning and data sets continue to grow, there is consistent demand for accelerating submodular maximization algorithms. In this talk I’ll introduce the adaptive sampling framework we recently developed and present experimental results from computational biology as well as other application domains.
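To make the setting concrete, here is a minimal sketch (not the speaker's algorithm) of the problem adaptive sampling accelerates: maximizing a monotone submodular function under a cardinality constraint. The toy coverage function and the classic sequential greedy algorithm below are illustrative assumptions; the point is that greedy needs k fully sequential rounds, one per element selected, whereas adaptive sampling methods aim to reduce the number of such adaptive rounds by evaluating many candidates in parallel.

```python
import random

# Toy instance: a coverage function f(S) = |union of the chosen sets|.
# Coverage is a canonical monotone submodular function.
universe = set(range(20))
sets = {i: set(random.Random(i).sample(sorted(universe), 6)) for i in range(10)}

def coverage(S):
    """f(S) = number of universe elements covered by the sets indexed by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def greedy(k):
    """Classic greedy: k sequential (adaptive) rounds, each picking the
    element with the largest marginal gain; achieves a (1 - 1/e) guarantee
    for monotone submodular maximization."""
    S = []
    for _ in range(k):
        best = max((i for i in sets if i not in S),
                   key=lambda i: coverage(S + [i]) - coverage(S))
        S.append(best)
    return S

solution = greedy(3)
```

Each greedy round depends on the previous one, so its adaptivity (parallel depth) is k; the exponential speedup in the talk refers to reducing this round complexity.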
