Distributionally Robust Optimization and Robust Statistics
arXiv (2024)
Abstract
We review distributionally robust optimization (DRO), a principled approach
for constructing statistical estimators that hedge against the impact of
deviations in the expected loss between the training and deployment
environments. Many well-known estimators in statistics and machine learning
(e.g., AdaBoost, LASSO, ridge regression, and dropout training) are
distributionally robust in a precise sense. We hope that by discussing the DRO
interpretation of these well-known estimators, statisticians who are less
familiar with DRO will find a point of entry to the DRO literature through the
bridge between classical results and their equivalent DRO formulations. On the
other hand, the topic of robustness in statistics has a rich tradition
associated with removing the impact of contamination. Thus, another objective
of this paper is to clarify the difference between DRO and classical
statistical robustness. As we will see, these are two fundamentally different
philosophies leading to completely different types of estimators. In DRO, the
statistician hedges against an environment shift that occurs after the decision
is made; thus DRO estimators tend to be pessimistic in an adversarial setting,
leading to a min-max type formulation. In classical robust statistics, the
statistician seeks to correct contamination that occurred before a decision is
made; thus robust statistical estimators tend to be optimistic, leading to a
min-min type formulation.
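
To make the contrast concrete, the two formulations can be sketched schematically (the notation below is ours, not quoted from the paper): let P_n denote the empirical distribution of the training data, \ell(\theta; X) a loss, and \mathcal{U}_\delta(P_n) a neighborhood of distributions of radius \delta (for instance a Wasserstein or divergence ball). The DRO estimator solves a min-max problem,

  \min_{\theta} \; \max_{Q \in \mathcal{U}_\delta(P_n)} \; \mathbb{E}_{Q}\big[\ell(\theta; X)\big],

hedging against the worst distribution the deployment environment may present, whereas a contamination-correcting robust estimator solves a min-min problem,

  \min_{\theta} \; \min_{Q \in \mathcal{U}_\delta(P_n)} \; \mathbb{E}_{Q}\big[\ell(\theta; X)\big],

optimistically selecting the most favorable distribution in the neighborhood, i.e., explaining part of the observed data away as contamination.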
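
As a minimal numerical sketch of the two philosophies (illustrative only, and not taken from the paper), the snippet below contrasts a DRO-style worst-case mean over a Kullback-Leibler ball, computed through the standard one-dimensional convex dual from the DRO literature, with a classical trimmed mean that discards presumed contamination; the radius delta = 0.1, the sample sizes, and the outlier value are arbitrary choices.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp
from scipy.stats import trim_mean

rng = np.random.default_rng(0)
# Clean N(0, 1) samples plus a few gross outliers acting as contamination.
x = np.concatenate([rng.normal(0.0, 1.0, 500), np.full(10, 8.0)])

def kl_dro_worst_case_mean(x, delta):
    # Worst-case mean over {Q : KL(Q || P_n) <= delta}, via the standard
    # dual  inf_{lam > 0}  lam * delta + lam * log E_{P_n}[exp(X / lam)].
    n = len(x)
    def dual(lam):
        return lam * delta + lam * (logsumexp(x / lam) - np.log(n))
    return minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded").fun

print("empirical mean     :", x.mean())                        # pulled up by the outliers
print("KL-DRO (pessimist) :", kl_dro_worst_case_mean(x, 0.1))  # min-max: larger still
print("trimmed (optimist) :", trim_mean(x, 0.1))               # min-min spirit: near 0

The DRO value sits above the empirical mean because the adversary reweights mass toward the large observations, while the trimmed mean moves below it by deleting the suspected contamination; the same data thus yield estimates on opposite sides of the empirical mean under the two philosophies.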