Title:
New Applications and Algorithms of Distributionally Robust Optimization in AI
Abstract:
In this talk, I will present our recent research on new applications and algorithms of Distributionally Robust Optimization (DRO) in AI, with a particular focus on training large foundation models such as contrastive language–image pretraining (CLIP) models. I will introduce a new learning framework, DRRHO risk minimization, which leverages open-weight models to accelerate the training of target models on custom datasets, and demonstrate its application to CLIP. I will then show how formulating the problem as a new class of finite-sum coupled compositional optimization enables the design of efficient algorithms with provable convergence guarantees. Finally, I will highlight the broader applications of these techniques across machine learning and AI.