Abstract
This paper studies the emergence of stable gaits in legged locomotion robots. A classifier system implementing an instance-based reinforcement learning scheme is used for sensory-motor control of an eight-legged mobile robot. An important feature of the classifier system is its ability to operate in a continuous sensor space. The robot has no a priori knowledge of the environment, no internal model of itself, and no goal coordinates; it is only assumed that the robot can acquire stable gaits by learning how to reach a light source. During learning, the control system self-organizes under reinforcement signals: reaching the light source yields a global reward, forward motion earns a local reward, and stepping back or falling down incurs a local punishment. Control actions are specified at the level of individual legs. The feasibility of the proposed self-organizing system is tested in both simulation and experiment. It is shown that, as learning progresses, the number of action rules in the classifier system stabilizes at a level corresponding to the acquired gait patterns.
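The abstract describes the control scheme only qualitatively. The sketch below is a minimal illustration, not the authors' implementation, of how an instance-based classifier system with continuous sensor matching, leg-level actions, and local/global reinforcement could be organized; the class names, reward magnitudes, matching radius, and pruning threshold are all illustrative assumptions.

```python
import math
import random

# Hypothetical reinforcement values; the paper specifies only their signs.
GLOBAL_REWARD = 10.0     # reaching the light source
LOCAL_REWARD = 1.0       # forward motion
LOCAL_PUNISHMENT = -1.0  # stepping back or falling down

class Rule:
    """An action rule: a stored sensor instance, a leg-level action, and a strength."""
    def __init__(self, sensors, action):
        self.sensors = sensors    # continuous sensor vector (e.g. light direction, leg angles)
        self.action = action      # discrete leg-level command, e.g. ("leg3", "swing_forward")
        self.strength = 0.0

class InstanceBasedClassifier:
    """Minimal instance-based classifier system: rules are created from
    encountered sensor instances and reinforced by local/global signals."""
    def __init__(self, actions, match_radius=0.5, epsilon=0.2):
        self.actions = actions
        self.match_radius = match_radius
        self.epsilon = epsilon
        self.rules = []

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def select_action(self, sensors):
        """Match stored instances against the continuous sensor vector and
        pick the strongest matching rule (epsilon-greedy); create a new rule
        from the current instance if nothing matches or for exploration."""
        matches = [r for r in self.rules
                   if self._distance(r.sensors, sensors) < self.match_radius]
        if not matches or random.random() < self.epsilon:
            rule = Rule(list(sensors), random.choice(self.actions))
            self.rules.append(rule)
            return rule
        return max(matches, key=lambda r: r.strength)

    def reinforce(self, rule, reward):
        """Update rule strength and prune rules that accumulate punishment;
        pruning is what lets the rule count settle as gaits stabilize."""
        rule.strength += reward
        self.rules = [r for r in self.rules if r.strength > -3.0]
```

A training loop would call `select_action` once per control step for each leg, execute the chosen action, and pass back `LOCAL_REWARD`, `LOCAL_PUNISHMENT`, or `GLOBAL_REWARD` depending on the outcome; tracking `len(controller.rules)` over episodes corresponds to the rule-count stabilization measure mentioned in the abstract.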
| Original language | English |
|---|---|
| Pages (from-to) | 180-190 |
| Number of pages | 11 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 3839 |
| Publication status | Published - Dec 1 1999 |
| Event | Proceedings of the 1999 Sensor Fusion and Decentralized Control in Robotic Systems II - Boston, MA, USA (Sept 19 1999 → Sept 20 1999) |
All Science Journal Classification (ASJC) codes
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering