Analyzing R-Learner and DR-Learner with Orthogonal Statistical Learning
Abstract
This is a study note on, and a possible simplification of, the error analysis of the R-learner (Nie and Wager, 2021) and the DR-learner (Kennedy, 2023), using the Orthogonal Statistical Learning (OSL) framework of Foster and Syrgkanis (2020). Unlike the original OSL work, I provide a doubly robust error bound for the DR-learner. Furthermore, I show that the first-order optimality condition and strong convexity of the risk function alone suffice to establish that the CATE estimation error exhibits quadratic or mixed-bias dependence on the nuisance estimation errors.
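To fix ideas, here is a minimal sketch of the two bound shapes the abstract refers to, in notation assumed here for illustration rather than taken verbatim from the note ($\tau$ the CATE, $e$ the propensity score, $m$ and $\mu$ outcome regressions, $\mathrm{Rate}_n$ the oracle rate attainable with known nuisances). Quadratic dependence (R-learner style):
\[
  \|\hat\tau - \tau\| \;\lesssim\; \mathrm{Rate}_n \;+\; \|\hat e - e\|^2 \;+\; \|\hat m - m\|^2,
\]
versus mixed-bias, doubly robust dependence (DR-learner style):
\[
  \|\hat\tau - \tau\| \;\lesssim\; \mathrm{Rate}_n \;+\; \|\hat e - e\| \cdot \|\hat\mu - \mu\|,
\]
so in the second case an accurately estimated nuisance can compensate for a poorly estimated one.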
Citation
If you use or refer to this note, please cite as:
Yonghan Jung, A Short Note on Empirical Excess Risk for Orthogonal Losses, 2025.
You can read the full technical note with all derivations and proofs here: