Decentralized reinforcement learning control and emergence of motion patterns

M. Svinin, K. Yamada, K. Ohkura, K. Ueda

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

In this paper we propose a system for studying the emergence of motion patterns in autonomous mobile robotic systems. The system implements instance-based reinforcement learning control. Three spaces are of importance in the formulation of the control scheme: the work space, the sensor space, and the action space. An important feature of our system is that all these spaces are assumed to be continuous. The core part of the system is a classifier system based on an analysis of the sensory state space. The control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly connected through the perceived environment information and therefore constitute a dynamic environment with respect to each other. The proposed control scheme is tested in simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior - such as collision avoidance, wall following, and light seeking - can emerge from the local controllers.
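The abstract gives no implementation details, so the following is only a rough illustrative sketch of what an instance-based reinforcement learning controller over continuous sensor and action spaces might look like. The class name, the nearest-neighbor lookup, the epsilon-greedy exploration, and the incremental value update are all assumptions for illustration, not the authors' actual classifier system:

```python
import math
import random

class InstanceBasedController:
    """Illustrative sketch (not the paper's algorithm): stores
    (sensor, action, value) instances in continuous spaces and
    reuses the action of the nearest stored instance."""

    def __init__(self, epsilon=0.2, alpha=0.5, action_dim=1):
        self.instances = []   # each entry: [sensor tuple, action tuple, value]
        self.epsilon = epsilon  # exploration rate (assumed)
        self.alpha = alpha      # learning rate (assumed)
        self.action_dim = action_dim
        self._last = None       # instance used by the most recent act()

    def _nearest(self, sensor):
        # Linear scan for the stored instance closest in sensor space.
        best, best_d = None, float("inf")
        for inst in self.instances:
            d = math.dist(sensor, inst[0])
            if d < best_d:
                best, best_d = inst, d
        return best

    def act(self, sensor):
        # Explore (or bootstrap) by creating a new random-action instance;
        # otherwise exploit the nearest stored instance.
        if not self.instances or random.random() < self.epsilon:
            action = tuple(random.uniform(-1.0, 1.0)
                           for _ in range(self.action_dim))
            inst = [tuple(sensor), action, 0.0]
            self.instances.append(inst)
        else:
            inst = self._nearest(sensor)
        self._last = inst
        return inst[1]

    def reward(self, r):
        # Move the used instance's value toward the received reward.
        self._last[2] += self.alpha * (r - self._last[2])
```

In a decentralized setting of the kind the abstract describes, each local controller would hold its own instance store like this one, coupled to the others only through the sensed environment.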

Original language: English
Pages (from-to): 223-234
Number of pages: 12
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 3523
Publication status: Published - Dec 1 1998
Event: Sensor Fusion and Decentralized Control in Robotic Systems IV - Boston, MA, United States
Duration: Nov 2 1998 - Nov 3 1998

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering