Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution over models, but this comes with no validity guarantees and is therefore suited only for ranking and selection. In this paper, I will present an alternative way to view this model uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I will show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.
Version 1 (2018-08-31)
Ryan Martin (2018). "On valid uncertainty quantification about a model." Researchers.One. https://researchers.one/articles/on-valid-uncertainty-quantification-about-a-model/5f52699b36a3e45f17ae7d52/v1