Leading Bayesians are critical of naive Bayesian updating
Bayesian updating is a very attractive idea. We express our uncertainty about a parameter of interest by specifying a prior distribution over it; then we collect some data and update our beliefs by combining the data with our prior to produce a posterior distribution; then we collect more data and combine our old posterior with the new data to get a new posterior; and so on. If we update enough, we will eventually converge on the truth, so they say. This idea feels really good, and it often leaves people with the impression that Bayesian updating will save them from having to think about confusing things like p-values or confidence intervals.
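To make the mechanics concrete, here is a minimal sketch of sequential updating (my own toy example, not from the paper), using a conjugate Beta-Binomial model for a coin's bias so that yesterday's posterior simply becomes today's prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: infer a coin's bias theta with a Beta prior.
true_theta = 0.7
alpha, beta = 1.0, 1.0          # Beta(1, 1) prior: uniform on [0, 1]

for batch in range(5):
    flips = rng.binomial(1, true_theta, size=20)   # collect a new batch of data
    alpha += flips.sum()                           # add observed heads
    beta += len(flips) - flips.sum()               # add observed tails
    post_mean = alpha / (alpha + beta)
    print(f"after batch {batch + 1}: posterior mean = {post_mean:.3f}")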
The problem with this thinking is that Bayesian updating requires choosing a particular statistical model, and the promised convergence to the truth only holds if that choice is correct.
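Here is a toy sketch of what can go wrong (again my own example, not from the paper): the data are overdispersed counts, but we insist on a Poisson model. The posterior for the rate concentrates happily, yet the fitted model badly misrepresents the variability in the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: the data are negative-binomial (overdispersed),
# but we model them as Poisson with a conjugate Gamma prior on the rate.
y = rng.negative_binomial(n=2, p=0.2, size=500)   # mean ~8, variance ~40

a0, b0 = 1.0, 1.0                                  # Gamma(a0, b0) prior on the rate
a_post, b_post = a0 + y.sum(), b0 + len(y)         # conjugate update
rate_draws = rng.gamma(a_post, 1.0 / b_post, size=10_000)

# The posterior for the rate dutifully concentrates near the sample mean...
print(f"data mean {y.mean():.2f}, posterior mean rate {rate_draws.mean():.2f}")
# ...but the fitted Poisson model forces variance == mean, while the data disagree.
print(f"data variance {y.var():.1f} vs. Poisson-implied variance {rate_draws.mean():.1f}")
```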
In a recent paper, leading Bayesians argue that doing Bayesian statistics well often requires using p-values and confidence intervals too:
To sum up, what Bayesian updating does when the model is false (i.e., in reality, always) is to try to concentrate the posterior on the best attainable approximations to the distribution of the data, ‘best’ being measured by likelihood. But depending on how the model is misspecified, and how θ represents the parameters of scientific interest, the impact of misspecification on inferring the latter can range from non-existent to profound. Since we are quite sure our models are wrong, we need to check whether the misspecification is so bad that inferences regarding the scientific parameters are in trouble. It is by this non-Bayesian checking of Bayesian models that we solve our principal–agent problem.
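A minimal sketch of what such a non-Bayesian check might look like, in the spirit of a posterior predictive check (my own illustration, continuing the toy Poisson example above): simulate replicated data sets from the fitted model and compute a p-value for a test statistic the model ought to reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical check of the misspecified Poisson model from the sketch above.
y = rng.negative_binomial(n=2, p=0.2, size=500)      # the observed (overdispersed) data
a_post, b_post = 1.0 + y.sum(), 1.0 + len(y)         # conjugate Gamma posterior for the rate

T_obs = y.var()                                      # test statistic: the data variance
T_rep = []
for _ in range(2000):
    rate = rng.gamma(a_post, 1.0 / b_post)           # draw a rate from the posterior
    y_rep = rng.poisson(rate, size=len(y))           # simulate a replicated data set
    T_rep.append(y_rep.var())

ppp = np.mean(np.array(T_rep) >= T_obs)              # posterior predictive p-value
print(f"posterior predictive p-value for the variance: {ppp:.3f}")
# A value near 0 or 1 flags that the model cannot reproduce this feature of the data.
```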