A policy representation using weighted multiple normal distribution real-time reinforcement learning feasible for varying optimal actions

Hajime Kimura, Takeshi Aramaki, Shigenobu Kobayashi

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

In this paper, we tackle a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can withstand trial-and-error learning. On this robot, incomplete perception problems arise from noisy sensors and cheap position-control motor systems. This incomplete perception also causes the optimal actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method and propose a new hierarchical policy representation scheme that consists of discrete action selection on the top level and continuous action selection on the low level of the hierarchy. The proposed hierarchical scheme accelerates learning in continuous action spaces, and it can pursue the optimal actions that vary with the progress of learning on our robotics problem. This paper compares and discusses several learning algorithms through simulations and demonstrates an application of the proposed method to the real robot.
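The abstract describes a two-level policy: a discrete top level that selects one of several normal-distribution components by weight, and a continuous low level that samples an action from the selected component. The following is a minimal sketch of such a weighted-mixture-of-Gaussians policy; all class and parameter names are illustrative assumptions, not the authors' implementation.

```python
import math
import random

class MixtureGaussianPolicy:
    """Sketch of a policy as a weighted mixture of 1-D normal distributions.

    Top level: discrete choice of a component via softmax over the weights.
    Low level: continuous action sampled from the chosen component's Gaussian.
    """

    def __init__(self, weights, means, stddevs):
        self.weights = list(weights)   # unnormalized component preferences
        self.means = list(means)       # per-component action means
        self.stddevs = list(stddevs)   # per-component exploration widths

    def _softmax(self):
        m = max(self.weights)
        exps = [math.exp(w - m) for w in self.weights]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(self, rng=random):
        probs = self._softmax()
        # Top level: discrete selection of one Gaussian component.
        r, acc, idx = rng.random(), 0.0, 0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                idx = i
                break
        # Low level: continuous action drawn from the selected component.
        action = rng.gauss(self.means[idx], self.stddevs[idx])
        return idx, action

# Example: two components centered at -1.0 and +1.0, second one preferred.
policy = MixtureGaussianPolicy(weights=[1.0, 2.0],
                               means=[-1.0, 1.0],
                               stddevs=[0.1, 0.1])
idx, action = policy.sample()
```

In an actor-critic setting, the weights, means, and standard deviations would be the actor's learnable parameters; because the component weights can shift over time, such a representation can track optimal actions that change as learning progresses, which is the property the abstract emphasizes.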

Original language: English
Pages (from-to): 316-324
Number of pages: 9
Journal: Transactions of the Japanese Society for Artificial Intelligence
Volume: 18
Issue number: 6
Publication status: Published - Dec 1 2003
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

