### Abstract

The relative g-entropy of two finite, discrete probability distributions x = (x_{1},...,x_{n}) and y = (y_{1},...,y_{n}) is defined as H_{g}(x,y) = Σ_{k}x_{k}g(y_{k}/x_{k} - 1), where g:(-1,∞)→R is convex and g(0) = 0. When g(t) = -log(1 + t), then H_{g}(x,y) = Σ_{k}x_{k}log(x_{k}/y_{k}), the usual relative entropy. Let P_{n} = {x ∈ R^{n} : Σ_{i}x_{i} = 1, x_{i} > 0 ∀i}. Our major result is that, for any m × n column-stochastic matrix A, the contraction coefficient defined as η_{g}(A) = sup{H_{g}(Ax,Ay)/H_{g}(x,y) : x,y ∈ P_{n}, x ≠ y} satisfies η_{g}(A) ≤ 1 - α(A), where α(A) = min_{j,k}Σ_{i} min(a_{ij}, a_{ik}) is Dobrushin's coefficient of ergodicity. Consequently, η_{g}(A) < 1 if and only if A is scrambling. Upper and lower bounds on η_{g}(A) are established. Analogous results hold for Markov chains in continuous time.
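The quantities in the abstract can be sketched numerically. The snippet below is an illustration, not code from the paper: it computes H_{g}(x,y) for the choice g(t) = -log(1 + t) (recovering the usual relative entropy), computes Dobrushin's coefficient α(A) for a small column-stochastic matrix, and checks the contraction bound H_{g}(Ax,Ay)/H_{g}(x,y) ≤ 1 - α(A) on one pair of distributions. All function names and the example matrix are illustrative.

```python
import numpy as np

def relative_g_entropy(x, y, g):
    """H_g(x, y) = sum_k x_k * g(y_k / x_k - 1), for convex g with g(0) = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * g(y / x - 1.0)))

def dobrushin_alpha(A):
    """alpha(A) = min over column pairs (j, k) of sum_i min(a_ij, a_ik)."""
    A = np.asarray(A, float)
    n = A.shape[1]
    return min(np.minimum(A[:, j], A[:, k]).sum()
               for j in range(n) for k in range(n))

# g(t) = -log(1 + t) gives H_g(x, y) = sum_k x_k log(x_k / y_k).
g = lambda t: -np.log1p(t)

# A column-stochastic matrix (each column sums to 1); it is scrambling,
# since every pair of columns has overlapping support.
A = np.array([[0.6, 0.3],
              [0.4, 0.7]])
x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])

h_before = relative_g_entropy(x, y, g)          # H_g(x, y)
h_after  = relative_g_entropy(A @ x, A @ y, g)  # H_g(Ax, Ay)
alpha    = dobrushin_alpha(A)                   # here alpha(A) = 0.7

# One instance of the paper's bound: the entropy ratio is at most 1 - alpha(A).
assert h_after / h_before <= 1.0 - alpha
```

Since A is scrambling, α(A) > 0, so the relative entropy strictly contracts, consistent with the theorem stated above.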

Original language | English
---|---
Pages (from-to) | 211-235
Number of pages | 25
Journal | Linear Algebra and Its Applications
Volume | 179
Issue number | C
DOIs | https://doi.org/10.1016/0024-3795(93)90331-H
Publication status | Published - Jan 15 1993


### All Science Journal Classification (ASJC) codes

- Algebra and Number Theory
- Numerical Analysis
- Geometry and Topology
- Discrete Mathematics and Combinatorics

### Cite this

Cohen, Joel E.; Iwasa, Yoh; Rautu, Gh.; Beth Ruskai, Mary; Seneta, Eugene; Zbaganu, Gh. (1993). **Relative entropy under mappings by stochastic matrices.** *Linear Algebra and Its Applications*, *179*(C), 211-235. ISSN 0024-3795. https://doi.org/10.1016/0024-3795(93)90331-H

Research output: Contribution to journal › Article
