Motivation. Roughly speaking, a weakly stabilizing system S executed under a probabilistic scheduler ρ is probabilistically self-stabilizing, in the sense that every execution eventually reaches a legitimate execution with probability 1 [1-3]. Here ρ is a set of Markov chains, one of which is selected for S by an adversary; its evolution generates an infinite activation sequence under which S is executed. The performance measure is the worst-case expected convergence time τ_{S,M} of S executed under a Markov chain M ∈ ρ. Let τ_{S,ρ} = sup_{M∈ρ} τ_{S,M}. Then S can be "comfortably" used as a probabilistically self-stabilizing system under ρ only if τ_{S,ρ} < ∞. There are S and ρ such that τ_{S,ρ} = ∞, even though τ_{S,M} < ∞ for every M ∈ ρ; for instance, if ρ contains chains M_1, M_2, … with τ_{S,M_k} = k, then each τ_{S,M_k} is finite but their supremum is not. Somewhat interestingly, for some S there is a randomized version S* of S such that τ_{S*,ρ} < ∞ even though τ_{S,ρ} = ∞, i.e., randomization helps. This motivates a characterization of the systems S that satisfy τ_{S*,ρ} < ∞.