Saddlepoint conditions on a predictor are introduced and developed to reconfirm that assuming a prior distribution is necessary for constructing a useful inferential procedure. One such condition implies that, under a given loss, the predictor induced by the maximum likelihood estimator is the worst, whereas the predictor induced by a suitable posterior mean is the best. This result indicates the promising role of Bayesian criteria, such as the deviance information criterion (DIC). As an implication, we critique the conventional empirical Bayes method for assuming a prior distribution only partially.