### Abstract

The relative g-entropy of two finite, discrete probability distributions x = (x_{1},...,x_{n}) and y = (y_{1},...,y_{n}) is defined as H_{g}(x,y) = Σ_{k}x_{k}g(y_{k}/x_{k} − 1), where g:(−1,∞)→R is convex and g(0) = 0. When g(t) = −log(1 + t), then H_{g}(x,y) = Σ_{k}x_{k}log(x_{k}/y_{k}), the usual relative entropy. Let P_{n} = {x ∈ R^{n} : Σ_{i}x_{i} = 1, x_{i} > 0 ∀i}. Our major result is that, for any m × n column-stochastic matrix A, the contraction coefficient defined as η_{g}(A) = sup{H_{g}(Ax,Ay)/H_{g}(x,y) : x,y ∈ P_{n}, x ≠ y} satisfies η_{g}(A) ≤ 1 − α(A), where α(A) = min_{j,k}Σ_{i} min(a_{ij}, a_{ik}) is Dobrushin's coefficient of ergodicity. Consequently, η_{g}(A) < 1 if and only if A is scrambling. Upper and lower bounds on η_{g}(A) are established. Analogous results hold for Markov chains in continuous time.
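The bound η_{g}(A) ≤ 1 − α(A) can be checked numerically. The sketch below (function names and the random test instance are our own, not from the paper) computes H_{g} for g(t) = −log(1 + t), i.e. the usual relative entropy, together with Dobrushin's coefficient α(A), and verifies the contraction inequality for one column-stochastic matrix:

```python
import numpy as np

def relative_g_entropy(x, y, g):
    """H_g(x, y) = sum_k x_k * g(y_k / x_k - 1), for x, y in P_n."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * g(y / x - 1.0)))

def dobrushin_alpha(A):
    """alpha(A) = min over column pairs (j, k) of sum_i min(a_ij, a_ik)."""
    A = np.asarray(A, float)
    n = A.shape[1]
    return min(np.minimum(A[:, j], A[:, k]).sum()
               for j in range(n) for k in range(j + 1, n))

# g(t) = -log(1 + t) recovers H_g(x, y) = sum_k x_k log(x_k / y_k)
g = lambda t: -np.log1p(t)

rng = np.random.default_rng(0)
A = rng.random((3, 4))
A /= A.sum(axis=0)                 # normalize columns: column-stochastic
x = rng.random(4); x /= x.sum()    # random points of P_4
y = rng.random(4); y /= y.sum()

ratio = relative_g_entropy(A @ x, A @ y, g) / relative_g_entropy(x, y, g)
alpha = dobrushin_alpha(A)
assert ratio <= 1 - alpha + 1e-12  # H_g(Ax, Ay) / H_g(x, y) <= 1 - alpha(A)
```

Since every entry of this random A is strictly positive, A is scrambling, α(A) > 0, and the ratio is bounded strictly below 1, as the theorem requires.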

| Original language | English |
| --- | --- |
| Pages (from-to) | 211-235 |
| Number of pages | 25 |
| Journal | Linear Algebra and Its Applications |
| Volume | 179 |
| Issue number | C |
| DOIs | https://doi.org/10.1016/0024-3795(93)90331-H |
| Publication status | Published - Jan 15 1993 |

### All Science Journal Classification (ASJC) codes

- Algebra and Number Theory
- Numerical Analysis
- Geometry and Topology
- Discrete Mathematics and Combinatorics

## Cite this

Relative entropy under mappings by stochastic matrices. *Linear Algebra and Its Applications*, *179*(C), 211-235. https://doi.org/10.1016/0024-3795(93)90331-H