My review of this interesting paper.
I just posted a revision to the manuscript, dated July 26th 2019, that incorporates some (but not all) of the comments from @JesseClifton, @HarryCrane, and @RichardGill. The questions raised are very interesting, and I just don't think I have anything new to add at this time beyond what I wrote in my response letter on March 25th 2019. Someday, maybe I'll understand things better and can add something meaningful, or maybe a follow-up paper is needed. But thanks again for the helpful feedback, I really appreciate it.
I think that the most important principle of statistical inference is Lucien Le Cam's Principle 0: never trust any principles 100% (or something like that). We have to remain able to be surprised and to completely rethink our models. Any standard philosophical framework for statistical inference fails because of Principle 0, and neglect of Principle 0 is responsible for major miscarriages of justice, scientific scandals, and more. We need to bring personal moral responsibility back as a basic principle of statistical inference.
More technically, any of the existing frameworks is a "model", and though many models are useful, none of them are actually "true". The question is whether or not they are adequate for the purpose at hand. The role of statistics in science is an important role in a many-party game. Bayesian theory asks: what should I believe? Hypothesis testing puts us in a two-person game. Neither is very interesting except as a very, very rough approximation. I am pretty sure that it is impossible to come up with a compelling multi-party framework. The situation is already bad enough in the already formalised context of a courtroom: there are always many more than two parties, even if legal theory sometimes likes to pretend otherwise.
Thanks for the feedback. My response to Clifton's 02/13/2019 comments and Crane's 03/12/2019 comments is in the attached PDF file.
See attached for comments.
(Attached PDF of comments)
Statistics has made tremendous advances since the times of Fisher, Neyman, Jeffreys, and others, but the fundamental and practically relevant questions about probability and inference that puzzled our founding fathers remain unanswered. To bridge this gap, I propose to look beyond the two dominant schools of thought and ask three questions: what do scientists need out of statistics, do the existing frameworks meet these needs, and, if not, how can the void be filled? To the first question, I contend that scientists seek to convert their data, posited statistical model, etc., into calibrated degrees of belief about quantities of interest. To the second question, I argue that any framework that returns additive beliefs, i.e., probabilities, necessarily suffers from false confidence---certain false hypotheses tend to be assigned high probability---and, therefore, risks systematic bias. This reveals the fundamental importance of non-additive beliefs in the context of statistical inference. But non-additivity alone is not enough, so, to the third question, I offer a sufficient condition, called validity, for avoiding false confidence, and present a framework, based on random sets and belief functions, that provably meets this condition. Finally, I discuss characterizations of p-values and confidence intervals in terms of valid non-additive beliefs, which imply that users of these classical procedures are already following the proposed framework without knowing it.
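To make the abstract's false confidence claim concrete, here is a minimal toy simulation, not taken from the paper itself; the model, numbers, and hypothesis are my own illustrative assumptions. With a flat prior and one observation X ~ N(θ, σ²), the posterior for θ is N(x, σ²). Take the true value θ = 0 and the false hypothesis H: |θ| > 0.1. When σ is large relative to 0.1, the additive (posterior) belief in H is near 1 in essentially every repetition of the experiment, even though H is false — one toy instance of the systematic bias the abstract describes.

```python
import math
import random

def std_normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_prob_H(x, sigma, c=0.1):
    # Posterior probability of H: |theta| > c, when the posterior is N(x, sigma^2)
    # (flat prior, single Gaussian observation -- an illustrative assumption).
    inside = std_normal_cdf((c - x) / sigma) - std_normal_cdf((-c - x) / sigma)
    return 1.0 - inside

random.seed(1)
sigma, theta_true = 10.0, 0.0  # noisy measurement; true theta = 0, so H is false
probs = [posterior_prob_H(random.gauss(theta_true, sigma), sigma)
         for _ in range(10_000)]

# Across 10,000 simulated experiments, the false hypothesis H receives
# posterior probability near 1 every single time.
print(min(probs))  # close to 1
```

The point of the sketch is only that an additive belief must move mass somewhere: because the posterior is far too diffuse to concentrate near the small true value, nearly all of its mass falls on the false hypothesis, and this happens systematically across repetitions rather than by bad luck on one dataset.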
➤ Version 1 (2019-02-03)
Ryan Martin (2019). False confidence, non-additive beliefs, and valid statistical inference. Researchers.One, https://researchers.one/articles/false-confidence-non-additive-beliefs-and-valid-statistical-inference/5f52699c36a3e45f17ae7db2/v1.