Robust empirical optimization is almost the same as mean–variance optimization


Abstract: We formulate a distributionally robust optimization problem where the deviation of the alternative distribution is controlled by a ϕ-divergence penalty in the objective, and show that a large class of these problems is essentially equivalent to a mean–variance problem. We also show that while a "small amount of robustness" always reduces the in-sample expected reward, the reduction in the variance, which is a measure of sensitivity to model misspecification, is an order of magnitude larger.
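The equivalence can be illustrated numerically for the Kullback–Leibler divergence, one member of the ϕ-divergence family. By the standard dual representation, the KL-penalized worst-case expected reward equals a scaled cumulant-generating function, which for a small robustness parameter δ expands to the sample mean minus (δ/2) times the sample variance. The sketch below is illustrative only and is not taken from the paper; the distribution, sample size, and value of δ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200_000)  # sampled rewards under the nominal model P
delta = 0.01  # small robustness parameter

# Dual form of the KL-penalized robust objective:
#   inf_Q { E_Q[X] + (1/delta) * KL(Q || P) } = -(1/delta) * log E_P[exp(-delta * X)]
robust = -np.log(np.mean(np.exp(-delta * x))) / delta

# First-order mean-variance approximation: E[X] - (delta/2) * Var[X]
mean_variance = x.mean() - 0.5 * delta * x.var()

print(f"robust objective:      {robust:.6f}")
print(f"mean-variance approx.: {mean_variance:.6f}")
print(f"in-sample mean:        {x.mean():.6f}")
```

For small δ the two quantities agree to O(δ²), and both sit below the plain in-sample mean, matching the observation that a small amount of robustness trades a small reduction in expected reward for a variance penalty.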
