BS in Symbolic Systems, Stanford University, 2008
MS in Computer Science, Stanford University, 2009
PhD in Computer Science, MIT, 2014
I am an Assistant Professor of Computer Science at the University of Toronto, and a founding member of the Vector Institute.

My group’s research focuses on machine learning, especially deep learning and Bayesian modeling. We aim to develop architectures and algorithms that train faster, generalize better, provide calibrated uncertainty estimates, and uncover the structure underlying a problem. We’re especially interested in scalable and flexible uncertainty models, so that intelligent agents can explore effectively and make robust decisions at test time. Towards these objectives, we also aim to automate the configuration of ML systems, from tuning optimization and regularization hyperparameters to designing models, architectures, and algorithms. Finally, we are starting to investigate the important and neglected problem of ensuring that AI systems remain aligned with human values.