Various authors have proposed probabilistic extensions of Valiant's PAC learning model in which the target to be learned is a conditional (or unconditional) probability distribution. In this paper, we improve upon the best known upper bounds on the sample complexity of learning an important class of stochastic rules called 'stochastic rules with finite partitioning' with respect to the classic notion of distance between distributions, the Kullback-Leibler divergence (KL-divergence). In particular, we improve the upper bound of order O(1/ε²) due to Abe, Takeuchi, and Warmuth to a bound of order O(1/ε). Our proof technique is interesting for at least two reasons: first, previously known upper bounds with respect to the KL-divergence were obtained using the uniform convergence technique, whereas our improved upper bound is obtained by exploiting properties of the maximum likelihood estimator; second, our proof relies on the fact that only a linear number of examples is required to distinguish the true parametric model from a bad parametric model. The latter notion appears to be related to the notion of discrimination proposed and studied by Yamanishi, but the exact relationship is yet to be determined.
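For reference, a minimal sketch of the distance measure the abstract refers to: the KL-divergence between a target stochastic rule and a hypothesis rule, averaged over the input distribution, in its standard form. The symbols p, q, D, X, and Y below are illustrative placeholders and are not fixed by the abstract itself.

\[
d_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{x \in X} D(x) \sum_{y \in Y} p(y \mid x)\, \log \frac{p(y \mid x)}{q(y \mid x)}
\]

Under this (standard) definition, the stated result asserts that on the order of 1/ε examples suffice to output a hypothesis whose divergence from the target is at most ε with high probability, improving on the previous bound of order 1/ε²; the dependence on the remaining parameters of the learning problem is not specified in the abstract.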