Title:

New Applications and Algorithms of Distributionally Robust Optimization in AI

Abstract:

In this talk, I will present our recent research on new applications and algorithms of Distributionally Robust Optimization (DRO) in AI, with a particular focus on training large foundation models such as contrastive language–image pretraining (CLIP) models. I will introduce a new learning framework, DRRHO risk minimization, which leverages open-weight models to accelerate the training of target models on custom datasets, and I will demonstrate its application to CLIP. By formulating the problem as a new class of finite-sum coupled compositional optimization, I will discuss how to design efficient algorithms with provable convergence guarantees. Finally, I will highlight the broader applications of these techniques across machine learning and AI.

Bio:

Tianbao Yang is a Professor and Herbert H. Richardson Faculty Fellow in the CSE Department at Texas A&M University, where he directs the Optimization for Machine Learning and AI Lab (OptMAI Lab). His research interests center on optimization, machine learning, and AI, with applications in trustworthy AI and medicine. Before joining TAMU, he was an assistant professor and then a tenured Dean's Excellence associate professor in the Computer Science Department at the University of Iowa from 2014 to 2022. Before that, he worked in Silicon Valley for two years as a machine learning researcher at GE Research and NEC Labs. He received the Best Student Paper Award at COLT 2012 and an NSF CAREER Award in 2019. He is the founder of the widely used LibAUC library, and he serves as an associate editor of multiple journals, including IEEE Transactions on Pattern Analysis and Machine Intelligence.