Speaker: I’m a director and research scientist at Meta, where I lead the Adaptive Experimentation team. We develop robust AI methods for sample-efficient optimization. We conduct applied and use-inspired basic research to solve real-world problems across the company, and scale these methods through the development of software frameworks. Our work is used broadly within Meta, with applications ranging from optimizing recommender system ranking policies and infrastructure, to AutoML, hardware design, and perception science. My research interests include Bayesian optimization, Bayesian machine learning, meta-learning, multi-armed bandits, and active learning. I am passionate about democratizing these methods through the development of open-source software, including BoTorch, a framework for Bayesian optimization research, and Ax, an end-user platform for Bayesian optimization and multi-armed bandits.
Abstract: Problems of efficient optimization via experiments, whether physical or computational, are ubiquitous in science and engineering. Bayesian optimization makes it possible to optimize over large input spaces with only dozens to hundreds of trials, and has seen success across fields including machine learning, chemistry, materials science, biology, perception science, and engineering. I will provide an overview of Bayesian optimization and, through real-world examples from Meta, illustrate recent advances in high-dimensional, multi-objective, and multi-fidelity scenarios.
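To make the abstract's core idea concrete, here is a minimal sketch of the Bayesian optimization loop: fit a Gaussian process surrogate to the observations so far, then pick the next trial by maximizing an acquisition function. This is an illustrative toy in plain NumPy, not the speaker's code and not BoTorch or Ax; the RBF lengthscale, the UCB acquisition, the grid of candidates, and the toy objective are all assumptions chosen for simplicity.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_obs, y_obs, x_cand, noise=1e-6):
    """GP regression: posterior mean and std at candidate points."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x, x) = 1 for the RBF kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, beta=2.0):
    """Maximize f by alternating GP fitting with a UCB acquisition step."""
    x_cand = np.linspace(bounds[0], bounds[1], 101)  # candidate grid
    x_obs = np.linspace(bounds[0], bounds[1], n_init)  # space-filling start
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x_obs, y_obs, x_cand)
        # Upper confidence bound: optimism in the face of uncertainty.
        x_next = x_cand[np.argmax(mu + beta * sigma)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    i_best = np.argmax(y_obs)
    return x_obs[i_best], y_obs[i_best]

# Toy objective with a known maximum at x = 0.3.
best_x, best_y = bayes_opt(lambda x: -(x - 0.3) ** 2)
```

The loop spends its early iterations where posterior uncertainty is high and later ones near the incumbent best, so it locates the optimum with far fewer evaluations than a dense grid search; production systems like the BoTorch and Ax frameworks mentioned in the bio replace each piece here (kernel, acquisition, candidate optimization) with far more sophisticated machinery.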