It is well known that the classical Bayesian posterior arises naturally as the unique solution of several different optimization problems, without the need to interpret data as conditional probabilities and then apply Bayes’ Theorem. Here it is shown that the Bayesian posterior is also the unique minimax optimizer of the loss of self-information in combining the prior and the likelihood distributions, and is the unique proportional consolidation of those same distributions. These results, direct corollaries of recent results about conflations of probability distributions, further reinforce the use of Bayesian posteriors, and may help partially reconcile some of the differences between classical and Bayesian statistics.
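As a concrete illustration of the connection stated above: for discrete distributions, the conflation is the normalized pointwise product of the probability mass functions, and taking that product of a prior and a (normalized) likelihood yields exactly the classical Bayesian posterior. A minimal numerical sketch, using hypothetical prior and likelihood values over a three-point parameter space:

```python
def conflation(p, q):
    """Conflation of two discrete distributions: their normalized pointwise product."""
    prod = [a * b for a, b in zip(p, q)]
    z = sum(prod)
    return [x / z for x in prod]

# Hypothetical example: three parameter values theta_1, theta_2, theta_3.
prior = [0.5, 0.3, 0.2]        # prior distribution over theta (assumed values)
likelihood = [0.1, 0.4, 0.5]   # normalized likelihood of the observed data (assumed values)

# Bayes' Theorem: posterior(theta) is proportional to prior(theta) * likelihood(theta).
evidence = sum(pr * lk for pr, lk in zip(prior, likelihood))
posterior = [pr * lk / evidence for pr, lk in zip(prior, likelihood)]

# The posterior coincides with the conflation of the prior and the likelihood.
assert all(abs(a - b) < 1e-12
           for a, b in zip(posterior, conflation(prior, likelihood)))
```

Note that normalizing the likelihood is not essential here: any constant factor cancels in the normalization, which is why the posterior and the conflation agree pointwise.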

URL: https://digitalcommons.calpoly.edu/rgp_rsr/92