Reinforcement learning for high-dimensional problems with symmetrical actions

M. A.S. Kamal, Junichi Murata

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)

4 Citations (Scopus)

Abstract

A reinforcement learning algorithm is proposed that copes with high dimensionality for a class of problems with symmetrical actions. Action selection does not need to consider the entire state; it only needs to look at a part of it. Moreover, every symmetrical action relates to the same kind of partial state, so the value function can be shared among the actions, which greatly reduces the size of the reinforcement learning problem. The overall learning algorithm remains equivalent to the standard reinforcement learning algorithm. Simulation results and other aspects are compared with standard and other reinforcement learning algorithms. The reduction in dimensionality and the much faster convergence, achieved without worsening other objectives, show the effectiveness of the proposed mechanism on a high-dimensional optimization problem with symmetrical actions.
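The value-sharing idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the load-balancing task, the parameter values, and the choice of queue length as the shared local substate are all assumptions made here for illustration. The point it demonstrates is the one the abstract states: when every symmetrical action depends only on the same kind of partial state, one shared Q-table keyed on that partial state can replace a table over the full joint state.

```python
import random
from collections import defaultdict

# Illustrative toy task (not from the paper): route each arriving job to
# one of N symmetric servers. The action "send to server i" depends only
# on server i's queue length, so a single Q-table shared by all actions,
# keyed on that local substate, replaces a table over all N queues.

N = 8            # number of symmetric actions (servers)
ALPHA = 0.1      # learning rate
GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration rate

Q = defaultdict(float)  # shared value table: local queue length -> value

def choose(queues):
    """Epsilon-greedy over the symmetric actions via the shared Q-table."""
    if random.random() < EPSILON:
        return random.randrange(N)
    return max(range(N), key=lambda i: Q[queues[i]])

def step(queues, a):
    """Send a job to server a; each busy server finishes a job w.p. 0.5."""
    queues = list(queues)
    queues[a] += 1
    reward = -queues[a]  # penalize routing to long queues
    for i in range(N):
        if queues[i] > 0 and random.random() < 0.5:
            queues[i] -= 1
    return queues, reward

random.seed(0)
queues = [0] * N
for _ in range(20000):
    a = choose(queues)
    s = queues[a]                     # local substate of the chosen action
    nxt, r = step(queues, a)
    best_next = max(Q[nxt[i]] for i in range(N))
    # Standard Q-learning update, applied to the one shared table.
    Q[s] += ALPHA * (r + GAMMA * best_next - Q[s])
    queues = nxt

print(round(Q[0], 2), round(Q[1], 2))
```

Because every visit to any server's local substate updates the same table, experience is pooled across all N symmetric actions, which is the source of the faster convergence the abstract reports.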

Original language: English
Title of host publication: 2004 IEEE International Conference on Systems, Man and Cybernetics, SMC 2004
Pages: 6192-6197
Number of pages: 6
DOI: 10.1109/ICSMC.2004.1401371
Publication status: Published - Dec 1 2004
Event: 2004 IEEE International Conference on Systems, Man and Cybernetics, SMC 2004 - The Hague, Netherlands
Duration: Oct 10 2004 - Oct 13 2004

Publication series

Name: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
Volume: 7
ISSN (Print): 1062-922X

All Science Journal Classification (ASJC) codes

  • Engineering (all)

Cite this

Kamal, M. A. S., & Murata, J. (2004). Reinforcement learning for high-dimensional problems with symmetrical actions. In 2004 IEEE International Conference on Systems, Man and Cybernetics, SMC 2004 (pp. 6192-6197). (Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics; Vol. 7). https://doi.org/10.1109/ICSMC.2004.1401371
