Abstract. The field of deep learning has achieved unprecedented performance gains at the expense of dramatic energy and storage costs due to vast model sizes and data volumes. A recent trend in technology demands a different type of machine learning: small but powerful models that fit on low-powered devices, neural coding algorithms that compress data or models to small file sizes, and fast approximate Bayesian inference algorithms that identify parameter uncertainties, e.g., for model pruning or averaging. In this talk, we view resource-efficient machine learning through the lens of Bayesian deep learning, covering different aspects of the problem: (a) storage-efficient deep learning, where we draw on variational inference to compress both models and data with high efficiency, (b) data-efficient deep learning, where we maximize the amount of information extracted from sparse sequential observations, and (c) runtime-efficient (Bayesian) deep learning, where we hybridize sampling and variational inference algorithms to train the types of models used in (a) and (b) more quickly and at greater scale. These new algorithms will be tested and applied in different domains, including image and video compression, text analysis, and the natural sciences.
Biography. Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and head of the statistical machine learning group at Disney Research, first in Pittsburgh on the CMU campus and later in Los Angeles. He previously held positions as a postdoctoral researcher with David Blei at Columbia University and as a PCCM Postdoctoral Fellow at Princeton University. Stephan holds a PhD in Theoretical Physics from the University of Cologne. He has held fellowships from the German National Merit Foundation and the Kavli Foundation. Stephan has been active as an Area Chair for NeurIPS and ICML and held a visiting researcher position at Google Brain. His research is currently supported by NSF, DARPA, and Qualcomm Research.