Penalized maximum likelihood methods that perform automatic variable selection are now ubiquitous in statistical research. It is well known, however, that these estimators are nonregular and consequently have limiting distributions that can be highly sensitive to small perturbations of the underlying generative model. This is the case even in the fixed “p” framework. Hence, the usual asymptotic methods for inference, such as the bootstrap and series approximations, often perform poorly in small samples and require modification. Here, we develop locally asymptotically consistent confidence intervals for regression coefficients when estimation is done using the Adaptive LASSO (Zou, 2006) in the fixed “p” framework. We construct the confidence intervals by sandwiching the nonregular functional of interest between two smooth, data-driven, upper and lower bounds and then approximating the distribution of the bounds using the bootstrap. We leverage the smoothness of the bounds to obtain consistent inference for the nonregular functional under both fixed and local alternatives. The bounds are adaptive to the amount of underlying nonregularity in the sense that they deliver asymptotically exact coverage whenever the underlying generative model is such that the Adaptive LASSO estimators are consistent and asymptotically normal, and conservative otherwise. The resultant confidence intervals possess a certain tightness property among all regular bounds. Although we focus on the Adaptive LASSO, our approach generalizes to other penalized methods. (Originally published as a technical report in 2014.)
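For readers unfamiliar with the estimator the abstract refers to, the following is a minimal sketch of the Adaptive LASSO of Zou (2006), not the paper's bound-based inference procedure. The function name, tuning values, and simulated data are illustrative assumptions; it uses the standard reduction of the weighted L1 penalty to an ordinary Lasso fit on rescaled covariates.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, lam=0.1, gamma=1.0):
    """Adaptive LASSO via a rescaled ordinary Lasso fit (illustrative sketch)."""
    # Pilot OLS fit supplies the adaptive weights w_j = |beta_ols_j|^gamma,
    # so coefficients with small pilot estimates are penalized more heavily.
    beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
    w = np.abs(beta_ols) ** gamma
    # Rescaling column j by w_j turns the weighted penalty
    # sum_j |beta_j| / w_j into an ordinary L1 penalty on theta_j = beta_j / w_j.
    Xw = X * w
    fit = Lasso(alpha=lam, fit_intercept=False).fit(Xw, y)
    return fit.coef_ * w  # map back to the original scale

# Simulated example: a sparse linear model with two nonzero coefficients.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta + rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y)
```

In such a simulation the null coefficients are typically shrunk to (or very near) zero while the signal coefficients are retained, which is the variable-selection behavior whose nonregularity motivates the paper.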
➤ Version 1 (2018-09-14)
Eric Laber and Susan Murphy (2018). Adaptive inference after model selection. Researchers.One, https://researchers.one/articles/adaptive-inference-after-model-selection/5f52699b36a3e45f17ae7d76/v1.