Mixing-time regularized policy gradient

Tetsuro Morimura, Takayuki Osogami, Tomoyuki Shirai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Policy gradient reinforcement learning (PGRL) has been receiving substantial attention as a means of seeking stochastic policies that maximize cumulative reward. However, the learning speed of PGRL is known to decrease substantially when PGRL explores policies whose induced Markov chains have long mixing times. We study a new approach that regularizes how PGRL explores policies by using the hitting time of the Markov chains. The hitting time gives an upper bound on the mixing time, and the proposed approach improves learning efficiency by keeping the mixing time of the Markov chains short. In particular, we propose a temporal-difference learning method for estimating the gradient of the hitting time. Numerical experiments show that the proposed method outperforms conventional PGRL methods.
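As background for the abstract's key idea, the expected hitting time h(s) of a target state satisfies the Bellman-style recursion h(target) = 0 and h(s) = 1 + E[h(s')] otherwise, which makes it amenable to temporal-difference estimation from sampled transitions. The sketch below is only an illustration of that recursion on a hypothetical biased random walk with a fixed transition rule; it is not the paper's algorithm, which estimates the *gradient* of the hitting time and uses it to regularize the policy gradient.

```python
import numpy as np

# Illustrative sketch (not the paper's method): TD(0) estimation of the
# expected hitting time h(s) of a target state under a fixed Markov chain.
# Since h(s) = 1 + E[h(s')] for s != target and h(target) = 0, sampled
# transitions (s, s') can be fit with h(s) += alpha * (1 + h(s') - h(s)).

rng = np.random.default_rng(0)

n_states, target = 5, 4

def step(s):
    # Hypothetical chain: biased random walk on a line, reflecting at 0,
    # moving right with probability 0.7 and left with probability 0.3.
    return s + 1 if rng.random() < 0.7 else max(s - 1, 0)

h = np.zeros(n_states)  # TD estimate of the hitting time of `target`
alpha = 0.05
for _ in range(20000):
    s = int(rng.integers(0, target))  # start anywhere left of the target
    while s != target:
        s2 = step(s)
        # One-step TD update toward the Bellman target 1 + h(s')
        h[s] += alpha * (1.0 + h[s2] - h[s])
        s = s2

print(np.round(h, 2))  # estimates grow with distance from the target state
```

In the paper's setting, an estimate along these lines (of the hitting-time gradient with respect to the policy parameters) would enter the policy-gradient update as a regularization term that discourages policies inducing slowly mixing chains.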

Original language: English
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Publisher: AI Access Foundation
Pages: 1997-2003
Number of pages: 7
ISBN (Electronic): 9781577356790
Publication status: Published - 2014
Event: 28th AAAI Conference on Artificial Intelligence, AAAI 2014, 26th Innovative Applications of Artificial Intelligence Conference, IAAI 2014 and the 5th Symposium on Educational Advances in Artificial Intelligence, EAAI 2014 - Quebec City, Canada
Duration: Jul 27 2014 - Jul 31 2014

Publication series

Name: Proceedings of the National Conference on Artificial Intelligence
Volume: 3


All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence


Cite this

    Morimura, T., Osogami, T., & Shirai, T. (2014). Mixing-time regularized policy gradient. In Proceedings of the National Conference on Artificial Intelligence (pp. 1997-2003). (Proceedings of the National Conference on Artificial Intelligence; Vol. 3). AI Access Foundation.