In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated---that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine if the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation where a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
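To fix ideas, here is a minimal sketch of the kind of procedure the abstract describes, not the GPrC algorithm from the paper itself: a power-likelihood (generalized Bayes) predictive under an assumed normal model with a conjugate prior, where the learning rate is chosen by a simple bootstrap so that one-step-ahead coverage of the upper prediction limit is approximately nominal. The normal model, the conjugate prior, the grid of learning-rate values, the bootstrap scheme, and the names `gen_predictive_quantile` and `tune_eta` are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def gen_predictive_quantile(y, eta, alpha, prior_mean=0.0, prior_prec=1e-3, sigma=1.0):
    """Upper-alpha quantile of a generalized (power-likelihood) Bayes predictive
    under an assumed N(theta, sigma^2) model with a conjugate normal prior on theta.
    Raising the likelihood to the power eta rescales the effective sample size."""
    n = len(y)
    post_prec = prior_prec + eta * n / sigma**2
    post_mean = (prior_prec * prior_mean + eta * np.sum(y) / sigma**2) / post_prec
    pred_sd = np.sqrt(sigma**2 + 1.0 / post_prec)  # predictive sd for one future observation
    return post_mean + norm.ppf(alpha) * pred_sd

def tune_eta(y, alpha=0.95, etas=np.linspace(0.2, 2.0, 19), B=500, seed=None):
    """Pick the learning rate whose bootstrap one-step-ahead coverage of the
    upper prediction limit is closest to the nominal level alpha."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    n = len(y)
    coverage = np.zeros(len(etas))
    for _ in range(B):
        boot = y[rng.integers(0, n, size=n)]   # bootstrap copy of the observed data
        y_next = y[rng.integers(0, n)]         # stand-in for the "next" observation
        for j, eta in enumerate(etas):
            coverage[j] += y_next <= gen_predictive_quantile(boot, eta, alpha)
    coverage /= B
    return etas[np.argmin(np.abs(coverage - alpha))]

# Example: data from a heavier-tailed distribution than the assumed normal model
y_obs = np.random.default_rng(0).standard_t(df=3, size=100)
eta_hat = tune_eta(y_obs, alpha=0.95, seed=1)
upper_limit = gen_predictive_quantile(y_obs, eta_hat, 0.95)
print(f"tuned learning rate: {eta_hat:.2f}, 95% prediction upper limit: {upper_limit:.2f}")
```

The bootstrap here only mimics the general idea of tuning the learning rate against empirical coverage; the paper's GPrC algorithm specifies its own calibration scheme and supports more general models than this conjugate normal example.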
Version 1 (2021-07-06)
Candace Wu and Ryan Martin (2021). Calibrating generalized predictive distributions. Researchers.One. https://researchers.one/articles/21.07.00001v1
Ryan Martin, July 7th, 2021 at 09:22 pm
Previous comment was from me; I forgot I was logged in under a different account.
ISIPTA 2021 Organizers, July 7th, 2021 at 09:18 pm
Briefly, this paper considers prediction under a posited parametric statistical model and develops a relatively simple strategy for adjusting the corresponding predictive distribution so that its predictions will be (approximately) calibrated even if the model is misspecified. *Comments welcome!*