On-line estimation of hidden Markov model parameters

Jun Mizuno, Tatsuya Watanabe, Kazuya Ueki, Kazuyuki Amano, Eiji Takimoto, Akira Maruoka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

In modeling signals such as speech with a Hidden Markov Model (HMM), it is often necessary to adapt not only to the inherent nonstationarity of the signal but also to changes of the sources (speakers) that produce it. The well-known Baum-Welch algorithm adjusts the HMM so as to optimize the fit between the model and the observed signal. In this paper we develop an algorithm, which we call the on-line Baum-Welch algorithm, by incorporating a learning rate into the off-line Baum-Welch algorithm. The algorithm proceeds in a series of trials: in each trial it produces an HMM M_t, then receives a symbol sequence w_t and incurs the loss -ln Pr(w_t | M_t), the negative log-likelihood of M_t evaluated at w_t. Performance is measured by the regret, the additional total loss of the algorithm over the total loss of a standard algorithm that serves as the criterion for the comparison; we take the off-line Baum-Welch algorithm as this standard. As a competing on-line method we use the Gradient Descent algorithm, and our experiments show that the on-line Baum-Welch algorithm compares favorably with it. We carry out the experiments not only on artificial data but also on reasonably realistic data obtained by transforming acoustic waveforms into symbol sequences via vector quantization. The results show that the on-line Baum-Welch algorithm adapts to changes of speaker very well.
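The trial protocol above maps directly onto code. Since the paper's exact reestimation formula is not reproduced on this page, the following is a minimal sketch of one natural reading of "incorporating the learning rate into the off-line Baum-Welch algorithm": on each trial, compute the ordinary single-sequence Baum-Welch reestimate with scaled forward-backward, then move the current parameters toward that reestimate by a convex combination with learning rate eta (eta = 1 recovers one off-line Baum-Welch iteration). The class and parameter names are illustrative assumptions, not the authors' code.

    import numpy as np

    class OnlineHMM:
        """Discrete-observation HMM with an on-line Baum-Welch-style update (sketch)."""

        def __init__(self, n_states, n_symbols, eta=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.eta = eta  # learning rate blended into the Baum-Welch reestimate
            self.pi = self._norm(rng.random(n_states))              # initial-state probabilities
            self.A = self._norm(rng.random((n_states, n_states)))   # transition probabilities
            self.B = self._norm(rng.random((n_states, n_symbols)))  # emission probabilities

        @staticmethod
        def _norm(x):
            return x / x.sum(axis=-1, keepdims=True)

        def _forward_backward(self, w):
            # Scaled forward-backward (Rabiner-style) to avoid numerical underflow.
            T, N = len(w), len(self.pi)
            alpha, beta, c = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
            alpha[0] = self.pi * self.B[:, w[0]]
            c[0] = alpha[0].sum()
            alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ self.A) * self.B[:, w[t]]
                c[t] = alpha[t].sum()
                alpha[t] /= c[t]
            beta[T - 1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = (self.A @ (self.B[:, w[t + 1]] * beta[t + 1])) / c[t + 1]
            return alpha, beta, c

        def loss(self, w):
            # The per-trial loss from the abstract: -ln Pr(w | M_t).
            _, _, c = self._forward_backward(w)
            return -np.log(c).sum()

        def update(self, w):
            # One trial (assumes len(w) >= 2): single-sequence Baum-Welch reestimate,
            # then a step of size eta toward it; eta = 1 gives one off-line iteration.
            alpha, beta, c = self._forward_backward(w)
            T, N = len(w), len(self.pi)
            gamma = alpha * beta   # gamma[t, i] = Pr(state i at time t | w)
            xi = np.zeros((N, N))  # expected transition counts
            for t in range(T - 1):
                xi += np.outer(alpha[t], self.B[:, w[t + 1]] * beta[t + 1]) * self.A / c[t + 1]
            A_hat = xi / gamma[:-1].sum(axis=0)[:, None]
            B_hat = np.zeros_like(self.B)
            for t in range(T):
                B_hat[:, w[t]] += gamma[t]
            B_hat /= gamma.sum(axis=0)[:, None]
            self.pi = (1 - self.eta) * self.pi + self.eta * gamma[0]
            self.A = (1 - self.eta) * self.A + self.eta * A_hat
            self.B = (1 - self.eta) * self.B + self.eta * B_hat

    # Example trial loop: report the loss before each update, as in the protocol.
    model = OnlineHMM(n_states=3, n_symbols=8, eta=0.2)
    rng = np.random.default_rng(1)
    for trial in range(5):
        w = rng.integers(0, 8, size=50)  # stand-in for a vector-quantized symbol sequence
        print(trial, model.loss(w))
        model.update(w)

The learning rate governs the trade-off between tracking a changing source quickly and estimating a fixed source accurately, which is what makes an on-line variant attractive for the speaker-change setting described in the abstract.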

Original language: English
Title of host publication: Discovery Science - 3rd International Conference, DS 2000, Proceedings
Publisher: Springer Verlag
Pages: 155-169
Number of pages: 15
ISBN (Print): 9783540413523
Publication status: Published - 2000
Externally published: Yes
Event: 3rd International Conference on Discovery Science, DS 2000 - Kyoto, Japan
Duration: Dec 4 2000 → Dec 6 2000

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1967
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 3rd International Conference on Discovery Science, DS 2000
Country: Japan
City: Kyoto
Period: 12/4/00 → 12/6/00

Fingerprint

Hidden Markov models
Gradient descent
Learning rate
Nonstationarity
Regret
Vector quantization
Speech signal
Waveform
Likelihood
Acoustics

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Mizuno, J., Watanabe, T., Ueki, K., Amano, K., Takimoto, E., & Maruoka, A. (2000). On-line estimation of hidden Markov model parameters. In Discovery Science - 3rd International Conference, DS 2000, Proceedings (pp. 155-169). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1967). Springer Verlag.

@inproceedings{688668affb5c40fcb8e247f7b13c9d1a,
title = "On-line estimation of hidden Markov model parameters",
author = "Jun Mizuno and Tatsuya Watanabe and Kazuya Ueki and Kazuyuki Amano and Eiji Takimoto and Akira Maruoka",
year = "2000",
language = "English",
isbn = "9783540413523",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "155--169",
booktitle = "Discovery Science - 3rd International Conference, DS 2000, Proceedings",
address = "Germany",

}
