Submitted on 2025-07-23
An inferential model (IM) is a model describing the construction of provably reliable, data-driven uncertainty quantification and inference about relevant unknowns. IMs and Fisher's fiducial argument have similar objectives, but a fundamental distinction between the two is that the former doesn't require that uncertainty quantification be probabilistic, offering greater flexibility and allowing for a proof of its reliability. Important recent developments have been made thanks in part to newfound connections with the imprecise probability literature, in particular, possibility theory. The brand of possibilistic IMs studied here is straightforward to construct, has very strong frequentist-like reliability properties, and offers fully conditional, Bayesian-like (imprecise) probabilistic reasoning. This paper reviews these key recent developments, describing the new theory, methods, and computational tools. A generalization of the basic possibilistic IM is also presented, making new and unexpected connections with ideas in modern statistics and machine learning, e.g., bootstrap and conformal prediction.
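For concreteness, one standard recipe for a possibilistic IM contour in a parametric model (a sketch of a common construction, not the full generality of the review) applies the probability-to-possibility transform to the relative likelihood:

\[
R(x,\theta) = \frac{L_x(\theta)}{\sup_{\vartheta} L_x(\vartheta)}, \qquad
\pi_x(\theta) = \mathsf{P}_{X \sim \mathsf{P}_\theta}\bigl\{ R(X,\theta) \le R(x,\theta) \bigr\},
\]

with hypotheses assigned the possibility \(\overline{\Pi}_x(H) = \sup_{\theta \in H} \pi_x(\theta)\); validity follows because \(\pi_X(\theta)\) is stochastically no smaller than uniform under \(\mathsf{P}_\theta\).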
Submitted on 2025-02-08
Inferential models (IMs) offer prior-free, Bayesian-like posterior degrees of belief designed for statistical inference, which feature a frequentist-like calibration property that ensures reliability of said inferences. The catch is that IMs' degrees of belief are possibilistic rather than probabilistic and, since the familiar Monte Carlo methods approximate probabilistic quantities, there are computational challenges associated with putting this framework into practice. The present paper addresses these challenges by developing a new Monte Carlo method designed specifically to approximate the IM's possibilistic output. The proposal is based on a characterization of the possibilistic IM's credal set, which identifies the "best probabilistic approximation" of the IM as a mixture distribution that can be readily approximated and sampled from. These samples can then be transformed into an approximation of the possibilistic IM. Numerical results are presented highlighting the proposed approximation's accuracy and computational efficiency.
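As a minimal sketch of the sample-to-possibility step described above (the paper's specific mixture characterization is not reproduced; the two-component Gaussian mixture below is a hypothetical stand-in for the probabilistic approximation, and the rank-based probability-to-possibility transform is an assumed choice):

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the "best probabilistic approximation" of the IM:
# here, a simple two-component Gaussian mixture over the parameter.
rng = np.random.default_rng(0)
weights = np.array([0.6, 0.4])
means, sds = np.array([0.0, 2.0]), np.array([1.0, 0.5])

def q_pdf(theta):
    """Density of the approximating mixture."""
    comps = stats.norm.pdf(np.asarray(theta)[..., None], means, sds)
    return comps @ weights

def q_sample(size):
    """Draw samples from the approximating mixture."""
    z = rng.choice(2, size=size, p=weights)
    return rng.normal(means[z], sds[z])

# Probability-to-possibility transform of the samples:
# pi_hat(theta) = fraction of draws whose density is <= the density at theta.
draws = q_sample(10_000)
q_at_draws = q_pdf(draws)

def contour_hat(theta):
    return np.mean(q_at_draws <= q_pdf(theta))

print(contour_hat(0.0), contour_hat(3.5))  # high vs. low plausibility
```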
Submitted on 2024-09-29
Classical statistical methods have theoretical justification when the sample size is predetermined. In applications, however, it's often the case that sample sizes aren't predetermined; instead, they're often data-dependent. Since those methods designed for static sample sizes aren't reliable when sample sizes are dynamic, there's been recent interest in e-processes and corresponding tests and confidence sets that are anytime valid in the sense that their justification holds up for arbitrary dynamic data-collection plans. But if the investigator has relevant-yet-incomplete prior information about the quantity of interest, then there's an opportunity for efficiency gain that existing approaches can't accommodate. The present paper offers a new, regularized e-process framework that features a knowledge-based, imprecise-probabilistic regularization with improved efficiency. A generalized version of Ville's inequality is established, ensuring that inference based on the regularized e-process remains anytime valid in a novel, knowledge-dependent sense. In addition, the proposed regularized e-processes facilitate possibility-theoretic uncertainty quantification with strong frequentist-like calibration properties and other desirable Bayesian-like features: they satisfy the likelihood principle, avoid sure loss, and offer formal decision-making with reliability guarantees.
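For reference, the classical inequality being generalized: if \((E_n)_{n \ge 0}\) is an e-process for a null hypothesis, i.e., nonnegative and upper-bounded by a supermartingale with initial value at most 1 under the null, then Ville's inequality gives

\[
\mathsf{P}\Bigl(\sup_{n \ge 1} E_n \ge \alpha^{-1}\Bigr) \le \alpha, \qquad \alpha \in (0,1),
\]

which is what makes tests and confidence sets built from e-processes anytime valid; the knowledge-dependent generalization developed in the paper is not restated here.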
Submitted on 2024-04-30
Inferential models (IMs) offer provably reliable, data-driven, possibilistic statistical inference. But despite IMs' theoretical and foundational advantages, efficient computation is often a challenge. This paper presents a simple and powerful numerical strategy for approximating the IM's possibility contour, or at least its alpha-cut for a specified alpha. Our proposal starts with the specification of a parametric family that, in a certain sense, approximately covers the credal set associated with the IM's possibility measure. Then the parameters of that parametric family are tuned in such a way that the family's 100(1-alpha)% credible set roughly matches the IM contour's alpha-cut. This is reminiscent of the variational approximations now widely used in Bayesian statistics, hence the name variational-like IM approximation.
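A toy sketch of the tuning step, under strong simplifying assumptions: the problem is one-dimensional, the approximating family is Gaussian and centered at the MLE, and the IM contour is cheap to evaluate (here a closed-form Gaussian-model contour stands in for the real, typically expensive one); the scale is tuned so the family's 100(1-alpha)% credible interval endpoint sits on the boundary of the contour's alpha-cut:

```python
import numpy as np
from scipy import stats, optimize

alpha = 0.10
n, xbar, sigma = 25, 1.3, 2.0   # toy Gaussian data summary (hypothetical)

def im_contour(theta):
    """Stand-in IM possibility contour for a N(theta, sigma^2) mean.
    In real problems this would be the (expensive) IM contour."""
    z = np.sqrt(n) * np.abs(theta - xbar) / sigma
    return 2 * (1 - stats.norm.cdf(z))

# Approximating family: N(xbar, s^2); tune s so that the endpoint of its
# 100(1 - alpha)% credible interval lands on the contour's alpha-cut boundary,
# i.e., im_contour(xbar + z_{1-alpha/2} * s) = alpha.
z_crit = stats.norm.ppf(1 - alpha / 2)
s_hat = optimize.brentq(
    lambda s: im_contour(xbar + z_crit * s) - alpha, 1e-6, 10.0
)
print("tuned scale:", s_hat, "vs. sigma/sqrt(n):", sigma / np.sqrt(n))
```

In this toy case the tuned scale recovers sigma/sqrt(n) exactly, since the stand-in contour is itself Gaussian-shaped; the point of the sketch is only the matching mechanics.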
Submitted on 2024-04-24
The false confidence theorem establishes that, for any data-driven, precise-probabilistic method for uncertainty quantification, there exist false hypotheses (both trivial and non-trivial) to which the method tends to assign high confidence. This raises concerns about the reliability of these widely-used methods, and shines a promising light on the consonant belief function-based methods that are provably immune to false confidence. But an existence result alone leaves much to be desired. Towards an answer to the title question, I show that, roughly, complements of convex hypotheses are afflicted by false confidence.
The inferential model (IM) framework offers alternatives to the familiar probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. Allowing uncertainty quantification to be imprecise makes exact validity/reliability possible. But are imprecision and exact validity compatible with the attainment of statistical efficiency? This paper gives an affirmative answer to this question via a new possibilistic Bernstein--von Mises theorem that parallels a fundamental result in Bayesian inference. Among other things, our result demonstrates that the IM solution is asymptotically efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramér--Rao lower bound.
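In the scalar, well-specified case, the flavor of the limit (an illustrative approximation, not the theorem's precise statement) is that the IM contour merges with the probability-to-possibility transform of the limiting Gaussian:

\[
\pi_{x^n}(\theta) \approx 2\Bigl\{1 - \Phi\bigl(\sqrt{n\,I(\hat\theta_n)}\,\lvert\theta - \hat\theta_n\rvert\bigr)\Bigr\},
\]

whose \(\alpha\)-cuts are asymptotically the usual Wald intervals with variance at the Cramér--Rao bound.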
Submitted on 2024-02-03
A common goal in statistics and machine learning is estimation of unknowns. Point estimates alone are of little value without an accompanying measure of uncertainty, but traditional uncertainty quantification methods, such as confidence sets and p-values, often require strong distributional or structural assumptions that may not be justified in modern problems. The present paper considers a very common case in machine learning, where the quantity of interest is the minimizer of a given risk (expected loss) function. For such cases, we propose a generalized universal procedure for inference on risk minimizers that features a finite-sample, frequentist validity property under mild distributional assumptions. One version of the proposed procedure is shown to be anytime-valid in the sense that it maintains validity properties regardless of the stopping rule used for the data collection process. We show how this anytime-validity property offers protection against certain factors contributing to the replication crisis in science.
Submitted on 2023-12-09
That science and other domains are now largely data-driven means virtually unlimited opportunities for statisticians. With great power comes responsibility, so it's imperative that statisticians ensure that the methods being developed to solve these problems are reliable. But reliable in what sense? This question is problematic because different notions of reliability correspond to distinct statistical schools of thought, each with their own philosophy and methodology, often giving different answers in applications. To achieve the goal of reliably solving modern problems, I argue that a balance in the behavioral--statistical priorities is needed. Towards this, I make use of Fisher's "underworld of probability" to motivate a new property called invulnerability that, roughly, requires the statistician to avoid the risk of losing money in a long-run sense. Then I go on to make connections between invulnerability and the more familiar behaviorally- and statistically-motivated notions, namely coherence and (frequentist-style) validity.
Submitted on 2023-09-23
As Basu (1977) writes, "Eliminating nuisance parameters from a model is universally recognized as a major problem of statistics," but nearly 50 years after Basu wrote these words, the two mainstream schools of thought in statistics have yet to solve the problem. Fortunately, the two mainstream frameworks aren't the only options. This series of papers rigorously develops a new and very general inferential model (IM) framework for imprecise-probabilistic statistical inference that is provably valid and efficient, while simultaneously accommodating incomplete or partial prior information about the relevant unknowns when it's available. The present paper, Part III in the series, tackles the marginal inference problem. Part II showed that, for parametric models, the likelihood function naturally plays a central role and, here, when nuisance parameters are present, the same principles suggest that the profile likelihood is the key player. When the likelihood factors nicely, so that the interest and nuisance parameters are perfectly separated, the valid and efficient profile-based marginal IM solution is immediate. But even when the likelihood doesn't factor nicely, the same profile-based solution remains valid and leads to efficiency gains. This is demonstrated in several examples, including the famous Behrens--Fisher and gamma mean problems, where I claim the proposed IM solution is the best solution available. Remarkably, the same profiling-based construction offers validity guarantees in the prediction and non-parametric inference problems. Finally, I show how a broader view of this new IM construction can handle non-parametric inference on risk minimizers and makes a connection between non-parametric IMs and conformal prediction.
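In symbols, a sketch of the profile-based construction (modulo details in the paper): with interest parameter \(\psi\) and nuisance parameter \(\lambda\), the relative profile likelihood and its validified contour are

\[
R(x,\psi) = \frac{\sup_{\lambda} L_x(\psi,\lambda)}{\sup_{\psi,\lambda} L_x(\psi,\lambda)}, \qquad
\pi_x(\psi) = \sup_{\lambda}\, \mathsf{P}_{(\psi,\lambda)}\bigl\{ R(X,\psi) \le R(x,\psi) \bigr\},
\]

so marginal inference on \(\psi\) proceeds from \(\pi_x\) without requiring the likelihood to factor.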
Distinguishing two classes of candidate models is a fundamental and practically important problem in statistical inference. Error rate control is crucial to the logic but, in complex nonparametric settings, such guarantees can be difficult to achieve, especially when the stopping rule that determines the data collection process is not available. In this paper we develop a novel e-process construction that leverages the so-called predictive recursion (PR) algorithm designed to rapidly and recursively fit nonparametric mixture models. The resulting PRe-process affords anytime valid inference uniformly over stopping rules and is shown to be efficient in the sense that it achieves the maximal growth rate under the alternative relative to the mixture model being fit by PR. In the special case of testing for a log-concave density, the PRe-process test is computationally simpler and faster, more stable, and no less efficient compared to a recently proposed anytime valid test.
Submitted on 2023-04-12
Statisticians are largely focused on developing methods that perform well in a frequentist sense---even the Bayesians. But the widely-publicized replication crisis suggests that these performance guarantees alone are not enough to instill confidence in scientific discoveries. In addition to reliably detecting hypotheses that are (in)compatible with data, investigators require methods that can probe for hypotheses that are actually supported by the data. In this paper, we demonstrate that valid inferential models (IMs) achieve both performance and probativeness properties and we offer a powerful new result that ensures the IM's probing is reliable. We also compare and contrast the IM's dual performance and probativeness abilities with that of Deborah Mayo's severe testing framework.
Submitted on 2023-03-30
Basu's via media is what he referred to as the middle road between the Bayesian and frequentist poles. He seemed skeptical that a suitable via media could be found, but I disagree. My basic claim is that the likelihood alone can't reliably support probabilistic inference, and I justify this by considering a technical trap that Basu stepped in concerning interpretation of the likelihood. While reliable probabilistic inference is out of reach, it turns out that reliable possibilistic inference is not. I lay out my proposed possibility-theoretic solution to Basu's via media and I investigate how the flexibility afforded by my imprecise-probabilistic solution can be leveraged to achieve the likelihood principle (or something close to it).
Submitted on 2023-03-15
Fisher's fiducial argument is widely viewed as a failed version of Neyman's theory of confidence limits. But Fisher's goal---Bayesian-like probabilistic uncertainty quantification without priors---was more ambitious than Neyman's, and it's not out of reach. I've recently shown that reliable, prior-free probabilistic uncertainty quantification must be grounded in the theory of imprecise probability, and I've put forward a possibility-theoretic solution that achieves it. This has been met with resistance, however, in part due to statisticians' singular focus on confidence limits. Indeed, if imprecision isn't needed to perform confidence-limit-related tasks, then what's the point? In this paper, for a class of practically useful models, I explain specifically why the fiducial argument gives valid confidence limits, i.e., it's the "best probabilistic approximation" of the possibilistic solution I recently advanced. This sheds new light on what the fiducial argument is doing and on what's lost in terms of reliability when imprecision is ignored and the fiducial argument is pushed for more than just confidence limits.
Submitted on 2022-12-01
A fundamental aspect of statistics is the integration of data from different sources. Classically, Fisher and others were focused on how to integrate homogeneous (or only mildly heterogeneous) sets of data. More recently, as data are becoming more accessible, the question of whether data sets from different sources should be integrated is becoming more relevant. The current literature treats this as a question with only two answers: integrate or don't. Here we take a different approach, motivated by information-sharing principles coming from the shrinkage estimation literature. In particular, we deviate from the do/don't perspective and propose a dial parameter that controls the extent to which two data sources are integrated. How far this dial parameter should be turned is shown to depend, for example, on the informativeness of the different data sources as measured by Fisher information. In the context of generalized linear models, this more nuanced data integration framework leads to relatively simple parameter estimates and valid tests/confidence intervals. Moreover, we demonstrate both theoretically and empirically that setting the dial parameter according to our recommendation leads to more efficient estimation compared to other binary data integration schemes.
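Purely to fix ideas, and not necessarily the estimator used in the paper: with two independent sources yielding estimators \(\hat\theta_1, \hat\theta_2\) and Fisher informations \(I_1, I_2\), a dial \(\omega \in [0,1]\) could enter as a precision-weighted combination,

\[
\hat\theta(\omega) = \frac{I_1\,\hat\theta_1 + \omega\, I_2\,\hat\theta_2}{I_1 + \omega\, I_2},
\]

with \(\omega = 0\) corresponding to "don't integrate," \(\omega = 1\) to full integration, and intermediate values shrinking one source partway toward the other.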
Submitted on 2022-11-23
Bayesian inference requires specification of a single, precise prior distribution, whereas frequentist inference only accommodates a vacuous prior. Since virtually every real-world application falls somewhere in between these two extremes, a new approach is needed. This series of papers develops a new framework that provides valid and efficient statistical inference, prediction, etc., while accommodating partial prior information and imprecisely-specified models more generally. This paper fleshes out a general inferential model construction that not only yields tests, confidence intervals, etc. with desirable error rate control guarantees, but also facilitates valid probabilistic reasoning with de Finetti-style no-sure-loss guarantees. The key technical novelty here is a so-called outer consonant approximation of a general imprecise probability which returns a data- and partial prior-dependent possibility measure to be used for inference and prediction. Despite some potentially unfamiliar imprecise-probabilistic concepts in the development, the result is an intuitive, likelihood-driven framework that will, as expected, agree with the familiar Bayesian and frequentist solutions in the respective extreme cases. More importantly, the proposed framework accommodates partial prior information where available and, therefore, leads to new solutions that were previously out of reach for both Bayesians and frequentists. Details are presented here for a wide range of examples, with more practical details to come in later installments.
Submitted on 2022-08-25
Inference on the minimum clinically important difference, or MCID, is an important practical problem in medicine. The basic idea is that a treatment being statistically significant may not lead to an improvement in the patients' well-being. The MCID is defined as a threshold such that, if a diagnostic measure exceeds this threshold, then the patients are more likely to notice an improvement. Typical formulations use an underspecified model, which makes a genuine Bayesian solution out of reach. Here, for a challenging personalized MCID problem, where the practically-significant threshold depends on patients' profiles, we develop a novel generalized posterior distribution, based on a working binary quantile regression model, that can be used for estimation and inference. The advantage of this formulation is two-fold: we can theoretically control the bias of the misspecified model and it has a latent variable representation which we can leverage for efficient Gibbs sampling. To ensure that the generalized Bayes inferences achieve a level of frequentist reliability, we propose a variation on the so-called generalized posterior calibration algorithm to suitably tune the spread of our proposed posterior.
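Schematically, and with the details of the paper's working model suppressed: the generalized (Gibbs) posterior has the form

\[
\pi_n^{(\omega)}(\theta) \propto \exp\{-\omega\, n\, \ell_n(\theta)\}\,\pi(\theta),
\]

where \(\ell_n\) is the empirical loss from the working binary quantile regression model, \(\pi\) is the prior, and \(\omega > 0\) is a learning rate controlling the spread; a calibration scheme in the spirit described above iterates

\[
\omega_{t+1} = \omega_t + \kappa_t\bigl\{\hat c_\alpha(\omega_t) - (1-\alpha)\bigr\},
\]

where \(\hat c_\alpha(\omega)\) is a bootstrap estimate of the coverage of the \(100(1-\alpha)\%\) credible sets, so \(\omega\) shrinks (widening the posterior) when coverage falls short of nominal.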
Submitted on 2022-05-13
This paper considers statistical inference in contexts where only incomplete prior information is available. We develop a practical construction of a suitably valid inferential model (IM) that (a) takes the form of a possibility measure, and (b) depends mainly on the likelihood and partial prior. We also propose a general computational algorithm through which the proposed IM can be evaluated in applications.
Submitted on 2021-12-25
Inferential models (IMs) are data-dependent, probability-like structures designed to quantify uncertainty about unknowns. As the name suggests, the focus has been on uncertainty quantification for inference, and on establishing a validity property that ensures the IM is reliable in a specific sense. The present paper develops an IM framework for decision problems and, in particular, investigates the decision-theoretic implications of the aforementioned validity property. I show that a valid IM's assessment of an action's quality, defined by a Choquet integral, will not be too optimistic compared to that of an oracle. This ensures that a valid IM tends not to favor actions that the oracle doesn't also favor, hence a valid IM is reliable for decision-making too. In a certain special class of structured statistical models, further connections can be made between the valid IM's favored actions and those favored by other more familiar frameworks, from which certain optimality conclusions can be drawn. An important step in these decision-theoretic developments is a characterization of the valid IM's credal set in terms of confidence distributions, which may be of independent interest.
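For readers less familiar with non-additive expectations: the Choquet integral of a bounded, nonnegative function \(f\) (e.g., the utility of an action as a function of the unknown \(\theta\)) with respect to the IM's upper probability \(\overline{\Pi}_x\) is

\[
\int f \, d\overline{\Pi}_x = \int_0^{\infty} \overline{\Pi}_x\{\theta : f(\theta) > t\}\, dt,
\]

which reduces to an ordinary expectation when \(\overline{\Pi}_x\) is additive; this is the assessment of an action's quality whose optimism the validity property is shown to control.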
Submitted on 2021-12-19
Existing frameworks for probabilistic inference assume the quantity of interest is the parameter of a posited statistical model. In machine learning applications, however, often there is no statistical model/parameter; the quantity of interest is a statistical functional, a feature of the underlying distribution. Model-based methods can only handle such problems indirectly, via marginalization from a model parameter to the real quantity of interest. Here we develop a generalized inferential model (IM) framework for direct probabilistic uncertainty quantification on the quantity of interest. In particular, we construct a data-dependent, bootstrap-based possibility measure for uncertainty quantification and inference. We then prove that this new approach provides approximately valid inference in the sense that the plausibility values assigned to hypotheses about the unknowns are asymptotically well-calibrated in a frequentist sense. Among other things, this implies that confidence regions for the underlying functional derived from our proposed IM are approximately valid. The method is shown to perform well in key examples, including quantile regression, and in a personalized medicine application.
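A minimal sketch of the bootstrap-to-possibility step, under assumptions made only for illustration (a scalar functional, here the median, and a rank-based probability-to-possibility transform of the bootstrap draws); the paper's actual construction and calibration are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)      # observed data (toy)
phi_hat = np.median(x)                        # point estimate of the functional

# Bootstrap the functional.
B = 5000
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])

def plausibility(phi0):
    """Possibility contour at a hypothesized value phi0: the fraction of
    bootstrap draws at least as far from the point estimate as phi0 is."""
    return np.mean(np.abs(boot - phi_hat) >= np.abs(phi0 - phi_hat))

print(plausibility(phi_hat), plausibility(phi_hat + 0.5))
```

The resulting alpha-cuts \(\{\phi_0 : \text{plausibility}(\phi_0) > \alpha\}\) behave like symmetrized bootstrap confidence intervals, which gives a rough sense of why regions derived from such a contour can be approximately valid.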
Submitted on 2021-12-07
Prediction, where observed data is used to quantify uncertainty about a future observation, is a fundamental problem in statistics. Prediction sets with coverage probability guarantees are a common solution, but these do not provide probabilistic uncertainty quantification in the sense of assigning beliefs to relevant assertions about the future observable. Alternatively, we recommend the use of a probabilistic predictor, a data-dependent (imprecise) probability distribution for the to-be-predicted observation given the observed data. It is essential that the probabilistic predictor be reliable or valid, and here we offer a notion of validity and explore its behavioral and statistical implications. In particular, we show that valid probabilistic predictors avoid sure loss and lead to prediction procedures with desirable frequentist error rate control properties. We also provide a general inferential model construction that yields a provably valid probabilistic predictor, and we illustrate this construction in regression and classification applications.
Submitted on 2021-09-14
In mathematical finance, Lévy processes are widely used for their ability to model both continuous variation and abrupt, discontinuous jumps. These jumps are practically relevant, so reliable inference on the feature that controls jump frequencies and magnitudes, namely, the Lévy density, is of critical importance. A specific obstacle to carrying out model-based (e.g., Bayesian) inference in such problems is that, for general Lévy processes, the likelihood is intractable. To overcome this obstacle, here we adopt a Gibbs posterior framework that updates a prior distribution using a suitable loss function instead of a likelihood. We establish asymptotic posterior concentration rates for the proposed Gibbs posterior. In particular, in the most interesting and practically relevant case, we give conditions under which the Gibbs posterior concentrates at (nearly) the minimax optimal rate, adaptive to the unknown smoothness of the true Lévy density.
Submitted on 2021-07-06
In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated---that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine if the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation where a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
Submitted on 2021-05-04
Between Bayesian and frequentist inference, it's commonly believed that the former is for cases where one has a prior and the latter is for cases where one has no prior. But the prior/no-prior classification isn't exhaustive, and most real-world applications fit somewhere in between these two extremes. That neither of the two dominant schools of thought is suited for these applications creates confusion and slows progress. A key observation here is that "no prior information" actually means no prior distribution can be ruled out, so the classically-frequentist context is best characterized as every prior. From this perspective, it's now clear that there's an entire spectrum of contexts depending on what, if any, partial prior information is available, with Bayesian (one prior) and frequentist (every prior) on opposite extremes. This paper ties the two frameworks together by formally treating those cases where only partial prior information is available using the theory of imprecise probability. The end result is a unified framework of (imprecise-probabilistic) statistical inference with a new validity condition that implies both frequentist-style error rate control for derived procedures and Bayesian-style coherence properties, relative to the given partial prior information. This new theory contains both the Bayesian and frequentist frameworks as special cases, since they're both valid in this new sense relative to their respective partial priors. Different constructions of these valid inferential models are considered, and compared based on their efficiency.
This research note describes the novel application of Zero-Knowledge Proofs to conduct data analysis. A zero-knowledge proof is useful to prove you can generate a particular result without revealing certain parts of the process. Using such a proof, in the setting of a crowdsourced dataset testing Seth Roberts’ Appetite Theory, we were able to conduct an independent data analysis of unshared data. The analysis queried the data set to generate both a numerical result and a computational proof which illuminated the dataset without revealing it. We suggest that this protocol could be useful for solving the “other” file drawer problem, where researchers naturally seek to hoard and protect their private data until they have extracted maximal value from it. And we further suggest and outline an application in citizen science.
Submitted on 2021-04-19
The article accompanies the first ever minted "alpha glyph", Alpha Glyph 1 - Independent Analysis of Seth Roberts Appetite Theory Study (Crane and Martin, 2021). https://zora.co/0xa18edFea684792B10223d68e2920467b7Dd6FE8b/2837. Alpha Glyph 1 was created from our independent analysis of Matt Stephenson’s first open source, scientific randomized control trial of Seth Roberts’s Appetite Theory.
Proceeds from the initial auction of Alpha Glyph 1 will be put toward funding both the replication and the eventual publication of the results of Stephenson’s study. Following Stephenson's incentive protocol, 50% of the proceeds from this auction go to those key contributors on whose work this builds.
Submitted on 2021-01-10
Between the two dominant schools of thought in statistics, namely, Bayesian and classical/frequentist, a main difference is that the former is grounded in the mathematically rigorous theory of probability while the latter is not. In this paper, I show that the latter is grounded in a different but equally mathematically rigorous theory of imprecise probability. Specifically, I show that for every suitable testing or confidence procedure with error rate control guarantees, there exists a consonant plausibility function whose derived testing or confidence procedure is no less efficient. Beyond its foundational implications, this characterization has at least two important practical consequences: first, it simplifies the interpretation of p-values and confidence regions, thus creating opportunities for improved education and scientific communication; second, the constructive proof of the main results leads to a strategy for new and improved methods in challenging inference problems.
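To make the consonant structure concrete (a sketch consistent with the spirit, though not necessarily the details, of the paper's construction): if \(p_x(\theta)\) is the p-value for testing \(\theta\) based on data \(x\), take the contour \(\pi_x(\theta) = p_x(\theta)\) and extend it consonantly to hypotheses via

\[
\overline{\Pi}_x(H) = \sup_{\theta \in H} p_x(\theta);
\]

the test "reject \(H\) when \(\overline{\Pi}_x(H) \le \alpha\)" then controls the Type I error at level \(\alpha\), and \(\{\theta : \pi_x(\theta) > \alpha\}\) is a \(100(1-\alpha)\%\) confidence region.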
Submitted on 2021-01-07
For high-dimensional inference problems, statisticians have a number of competing interests. On the one hand, procedures should provide accurate estimation, reliable structure learning, and valid uncertainty quantification. On the other hand, procedures should be computationally efficient and able to scale to very high dimensions. In this note, I show that a very simple data-dependent measure can achieve all of these desirable properties simultaneously, along with some robustness to the error distribution, in sparse sequence models.
Submitted on 2020-10-20
Mixture models are commonly used when data show signs of heterogeneity and, often, it is important to estimate the distribution of the latent variable responsible for that heterogeneity. This is a common problem for data taking values in a Euclidean space, but the work on mixing distribution estimation based on directional data taking values on the unit sphere is limited. In this paper, we propose using the predictive recursion (PR) algorithm to estimate the mixing distribution in mixture models on the sphere. One key feature of PR is its computational efficiency. Moreover, compared to likelihood-based methods that only support finite mixing distribution estimates, PR is able to estimate a smooth mixing density. PR's asymptotic consistency in spherical mixture models is established, and simulation results showcase its benefits compared to existing likelihood-based methods. We also show two real-data examples to illustrate how PR can be used for goodness-of-fit testing and clustering.
Submitted on 2020-08-14
C. Cunen, N. Hjort, and T. Schweder published a comment on our paper, Satellite conjunction analysis and the false confidence theorem. Here is our response to their comment.
Submitted on 2020-08-14
The inferential model (IM) framework produces data-dependent, non-additive degrees of belief about the unknown parameter that are provably valid. The validity property guarantees, among other things, that inference procedures derived from the IM control frequentist error rates at the nominal level. A technical complication is that IMs are built on a relatively unfamiliar theory of random sets. Here we develop an alternative---and practically equivalent---formulation, based on a theory of possibility measures, which is simpler in many respects. This new perspective also sheds light on the relationship between IMs and Fisher's fiducial inference, as well as on the construction of optimal IMs.
Predicting the response at an unobserved location is a fundamental problem in spatial statistics. Given the difficulty in modeling spatial dependence, especially in non-stationary cases, model-based prediction intervals are at risk of misspecification bias that can negatively affect their validity. Here we present a new approach for model-free nonparametric spatial prediction based on the conformal prediction machinery. Our key observation is that spatial data can be treated as exactly or approximately exchangeable in a wide range of settings. In particular, under an infill asymptotic regime, we prove that the response values are, in a certain sense, locally approximately exchangeable for a broad class of spatial processes, and we develop a local spatial conformal prediction algorithm that yields valid prediction intervals without strong model assumptions like stationarity. Numerical examples with both real and simulated data confirm that the proposed conformal prediction intervals are valid and generally more efficient than existing model-based procedures for large datasets across a range of non-stationary and non-Gaussian settings.
Valid prediction of future observations is an important and challenging problem. The two mainstream approaches for quantifying prediction uncertainty use prediction regions and predictive distributions, respectively, with the latter believed to be more informative because it can perform other prediction-related tasks. The standard notion of validity, what we refer to here as Type-1 validity, focuses on coverage probability bounds for prediction regions, while a notion of validity relevant to the other prediction-related tasks performed by predictive distributions is lacking. Here we present a new notion, called Type-2 validity, relevant to these other prediction tasks. We establish connections between Type-2 validity and coherence properties, and argue that imprecise probability considerations are required in order to achieve it. We go on to show that both types of prediction validity can be achieved by interpreting the conformal prediction output as the contour function of a consonant plausibility measure. We also offer an alternative characterization of conformal prediction, based on a new nonparametric inferential model construction, wherein the appearance of consonance is more natural, and prove its validity.
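For intuition, the standard conformal transducer already has the form of a possibility contour for the future observation (stated here without the paper's surrounding IM construction):

\[
\pi_x(\tilde y) = \frac{1}{n+1}\,\#\bigl\{ i \le n+1 : R_i(\tilde y) \ge R_{n+1}(\tilde y) \bigr\},
\]

where the \(R_i(\tilde y)\) are non-conformity scores computed on the data augmented with the candidate value \(\tilde y\); the \(\alpha\)-cut \(\{\tilde y : \pi_x(\tilde y) > \alpha\}\) is the usual conformal prediction region with Type-1 coverage at least \(1-\alpha\).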
Submitted on 2019-11-29
Inferential challenges that arise when data are censored have been extensively studied under the classical frameworks. In this paper, we provide an alternative generalized inferential model approach whose output is a data-dependent plausibility function. This construction is driven by an association between the distribution of the relative likelihood function at the interest parameter and an unobserved auxiliary variable. The plausibility function emerges from the distribution of a suitably calibrated random set designed to predict that unobserved auxiliary variable. The evaluation of this plausibility function requires a novel use of the classical Kaplan--Meier estimator to estimate the censoring rather than the event distribution. We prove that the proposed method provides valid inference, at least approximately, and our real- and simulated-data examples demonstrate its superior performance compared to existing methods.
Submitted on 2019-10-16
Bias resulting from model misspecification is a concern when predicting insurance claims. Indeed, this bias puts the insurer at risk of making invalid or unreliable predictions. A method that could provide provably valid predictions uniformly across a large class of possible distributions would effectively eliminate the risk of model misspecification bias. Conformal prediction is one such method that can meet this need, and here we tailor that approach to the typical insurance application and show that the predictions are not only valid but also efficient across a wide range of settings.
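A minimal split-conformal sketch of the kind of machinery being tailored here, with generic ingredients rather than the paper's specific insurance implementation (the linear point predictor and absolute-residual score are assumed choices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=4, size=n)  # heavy tails

# Split into a training fold and a calibration fold.
idx = rng.permutation(n)
train, calib = idx[:500], idx[500:]
model = LinearRegression().fit(X[train], y[train])

# Calibration scores: absolute residuals on the held-out fold.
scores = np.abs(y[calib] - model.predict(X[calib]))
alpha = 0.1
k = int(np.ceil((1 - alpha) * (len(scores) + 1)))   # conformal rank
q = np.sort(scores)[k - 1]                          # calibrated score quantile

# Prediction interval for a new claim profile x_new.
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print("90% conformal interval:", (pred - q, pred + q))
```

The coverage guarantee holds for any point predictor plugged in above, which is the sense in which such intervals sidestep model misspecification bias.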
Submitted on 2019-09-30
Whether the predictions put forth prior to the 2016 U.S. presidential election were right or wrong is a question that led to much debate. But rather than focusing on right or wrong, we analyze the 2016 predictions with respect to a core set of effectiveness principles, and conclude that they were ineffective in conveying the uncertainty behind their assessments. Along the way, we extract key insights that will help to avoid, in future elections, the systematic errors that led to overly precise and overconfident predictions in 2016. Specifically, we highlight shortcomings of the classical interpretations of probability and its communication in the form of predictions, and present an alternative approach with two important features. First, our recommended predictions are safer in that they come with certain guarantees on the probability of an erroneous prediction; second, our approach easily and naturally reflects the (possibly substantial) uncertainty about the model by outputting plausibilities instead of probabilities.
Submitted on 2019-09-30
Meta-analysis based on only a few studies remains a challenging problem, as an accurate estimate of the between-study variance is apparently needed, but hard to attain, within this setting. Here we offer a new approach, based on the generalized inferential model framework, whose success lies in marginalizing out the between-study variance, so that an accurate estimate is not essential. We show theoretically that the proposed solution is at least approximately valid, with numerical results suggesting it is, in fact, nearly exact. We also demonstrate that the proposed solution outperforms existing methods across a wide range of scenarios.
Submitted on 2019-07-19
In the context of predicting future claims, a fully Bayesian analysis---one that specifies a statistical model, prior distribution, and updates using Bayes's formula---is often viewed as the gold-standard, while Bühlmann's credibility estimator serves as a simple approximation. But those desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and our conclusion is that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is not sensitive at all to model misspecification, giving it an advantage over the Bayesian solution in those practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require that we abandon uncertainty quantification based on a posterior distribution? Our answer to this question is No, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
An inferential model encodes the data analyst's degrees of belief about an unknown quantity of interest based on the observed data, posited statistical model, etc. Inferences drawn based on these degrees of belief should be reliable in a certain sense, so we require the inferential model to be valid. The construction of valid inferential models based on individual pieces of data is relatively straightforward, but how to combine these so that the validity property is preserved? In this paper we analyze some common combination rules with respect to this question, and we conclude that the best strategy currently available is one that combines via a certain dimension reduction step before the inferential model construction.
Submitted on 2019-03-03
In this paper we adopt the familiar sparse, high-dimensional linear regression model and focus on the important but often overlooked task of prediction. In particular, we consider a new empirical Bayes framework that incorporates data in the prior in two ways: one is to center the prior for the non-zero regression coefficients and the other is to provide some additional regularization. We show that, in certain settings, the asymptotic concentration of the proposed empirical Bayes posterior predictive distribution is very fast, and we establish a Bernstein--von Mises theorem which ensures that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability. The empirical prior has a convenient conjugate form, so posterior computations are relatively simple and fast. Finally, our numerical results demonstrate the proposed method's strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared to existing Bayesian methods.
Submitted on 2019-02-03
Statistics has made tremendous advances since the times of Fisher, Neyman, Jeffreys, and others, but the fundamental and practically relevant questions about probability and inference that puzzled our founding fathers remain unanswered. To bridge this gap, I propose to look beyond the two dominating schools of thought and ask the following three questions: what do scientists need out of statistics, do the existing frameworks meet these needs, and, if not, how to fill the void? To the first question, I contend that scientists seek to convert their data, posited statistical model, etc., into calibrated degrees of belief about quantities of interest. To the second question, I argue that any framework that returns additive beliefs, i.e., probabilities, necessarily suffers from false confidence---certain false hypotheses tend to be assigned high probability---and, therefore, risks systematic bias. This reveals the fundamental importance of non-additive beliefs in the context of statistical inference. But non-additivity alone is not enough so, to the third question, I offer a sufficient condition, called validity, for avoiding false confidence, and present a framework, based on random sets and belief functions, that provably meets this condition. Finally, I discuss characterizations of p-values and confidence intervals in terms of valid non-additive beliefs, which imply that users of these classical procedures are already following the proposed framework without knowing it.
Submitted on 2018-12-05
Bayesian methods provide a natural means for uncertainty quantification, that is, credible sets can be easily obtained from the posterior distribution. But is this uncertainty quantification valid in the sense that the posterior credible sets attain the nominal frequentist coverage probability? This paper investigates the frequentist validity of posterior uncertainty quantification based on a class of empirical priors in the sparse normal mean model. In particular, we show that our marginal posterior credible intervals achieve the nominal frequentist coverage probability under conditions slightly weaker than needed for selection consistency and a Bernstein--von Mises theorem for the full posterior, and numerical investigations suggest that our empirical Bayes method has superior frequentist coverage probability properties compared to other fully Bayes methods.
Submitted on 2018-12-05
Nonparametric estimation of a mixing density based on observations from the corresponding mixture is a challenging statistical problem. This paper surveys the literature on a fast, recursive estimator based on the predictive recursion algorithm. After introducing the algorithm and giving a few examples, I summarize the available asymptotic convergence theory, describe an important semiparametric extension, and highlight two interesting applications. I conclude with a discussion of several recent developments in this area and some open problems.
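For readers new to the algorithm, a bare-bones sketch of the PR update on a grid; the normal kernel, uniform initialization, and polynomially decaying weights are assumed choices for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Toy data: normal location mixture with mixing distribution 0.5*d(-2) + 0.5*d(+2).
u_true = rng.choice([-2.0, 2.0], size=500)
x = rng.normal(loc=u_true, scale=1.0)

# Grid and initial guess for the mixing density f_0.
grid = np.linspace(-6, 6, 401)
du = grid[1] - grid[0]
f = np.ones_like(grid) / (grid[-1] - grid[0])   # uniform initial guess

def kernel(xi, u):
    """Normal kernel k(x | u) with unit scale (an assumed choice)."""
    return stats.norm.pdf(xi, loc=u, scale=1.0)

# Predictive recursion: one pass through the data.
for i, xi in enumerate(x, start=1):
    w = (i + 1) ** (-0.67)                      # weight sequence w_i
    k_i = kernel(xi, grid)
    m_i = np.sum(k_i * f) * du                  # current mixture density at x_i
    f = (1 - w) * f + w * k_i * f / m_i         # PR update of the mixing density

print("estimated mixing mass near -2 and +2:",
      np.sum(f[np.abs(grid + 2) < 0.5]) * du,
      np.sum(f[np.abs(grid - 2) < 0.5]) * du)
```

The single recursive pass is what makes PR fast, and the smooth grid estimate of the mixing density is exactly the feature the survey contrasts with finite mixing distribution estimates from likelihood-based methods.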
This article describes how the filtering role played by peer review may actually be harmful rather than helpful to the quality of the scientific literature. We argue that, instead of trying to filter out the low-quality research, as is done by traditional journals, a better strategy is to let everything through but with an acknowledgment of the uncertain quality of what is published, as is done on the RESEARCHERS.ONE platform. We refer to this as "scholarly mithridatism." When researchers approach what they read with doubt rather than blind trust, they are more likely to identify errors, which protects the scientific community from the dangerous effects of error propagation, making the literature stronger rather than more fragile.
Submitted on 2018-09-04
In a Bayesian context, prior specification for inference on monotone densities is conceptually straightforward, but proving posterior convergence theorems is complicated by the fact that desirable prior concentration properties often are not satisfied. In this paper, I first develop a new prior designed specifically to satisfy an empirical version of the prior concentration property, and then I give sufficient conditions on the prior inputs such that the corresponding empirical Bayes posterior concentrates around the true monotone density at nearly the optimal minimax rate. Numerical illustrations also reveal the practical benefits of the proposed empirical Bayes approach compared to Dirichlet process mixtures.
Submitted on 2018-09-04
Accurate estimation of value-at-risk (VaR) and assessment of associated uncertainty is crucial for both insurers and regulators, particularly in Europe. Existing approaches link data and VaR indirectly by first linking data to the parameter of a probability model, and then expressing VaR as a function of that parameter. This indirect approach exposes the insurer to model misspecification bias or estimation inefficiency, depending on whether the parameter is finite- or infinite-dimensional. In this paper, we link data and VaR directly via what we call a discrepancy function, and this leads naturally to a Gibbs posterior distribution for VaR that does not suffer from the aforementioned biases and inefficiencies. Asymptotic consistency and root-n concentration rate of the Gibbs posterior are established, and simulations highlight its superior finite-sample performance compared to other approaches.
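Schematically, with the discrepancy written generically rather than as the paper's specific choice: if \(\theta\) denotes the VaR and \(d_n(\theta)\) is an empirical discrepancy linking data to \(\theta\) directly, e.g., an average check loss \(d_n(\theta) = n^{-1}\sum_{i=1}^n \rho_\tau(x_i - \theta)\) at the appropriate quantile level \(\tau\), then the Gibbs posterior takes the form

\[
\pi_n(\theta) \propto \exp\{-\omega\, n\, d_n(\theta)\}\,\pi(\theta),
\]

with prior \(\pi\) and learning rate \(\omega > 0\), so that inference on VaR proceeds without first estimating a full probability model for the losses.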
Submitted on 2018-08-31
Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution for the model but it comes with no validity guarantees, and, therefore, is only suited for ranking and selection. In this paper, I will present an alternative way to view this model uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I will show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.
Publication of scientific research all but requires a supporting statistical analysis, anointing statisticians the de facto gatekeepers of modern scientific discovery. While the potential of statistics for providing scientific insights is undeniable, there is a crisis in the scientific community due to poor statistical practice. Unfortunately, widespread calls to action have not been effective, in part because of statisticians’ tendency to make statistics appear simple. We argue that statistics can meet the needs of science only by empowering scientists to make sound judgments that account for both the nuances of the application and the inherent complexity of fundamental effective statistical practice. In particular, we emphasize a set of statistical principles that scientists can adapt to their ever-expanding scope of problems.