A reinforcement learning algorithm is proposed that copes with high dimensionality for a class of problems with symmetrical actions. Action selection need not consider the full state; it only examines a part of the state. Moreover, every symmetrical action is associated with the same kind of state part, so the value function can be shared among the actions, which greatly reduces the size of the reinforcement learning problem. The overall learning algorithm remains equivalent to the standard reinforcement learning algorithm. Simulation results and other aspects are compared with standard and other reinforcement learning algorithms. The reduction in dimensionality and the much faster convergence, achieved without sacrificing other objectives, demonstrate the effectiveness of the proposed mechanism on a high-dimensional optimization problem with symmetrical actions.
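The value-sharing idea described above can be sketched in code. The toy task below (N symmetric "slots", where action i flips slot i and observes only that slot) is an invented illustration, not the paper's problem: because every symmetric action looks at the same kind of local state part, one shared Q-table over the local feature replaces a table over all 2**N joint states, while the update rule itself is ordinary Q-learning.

```python
import random
from collections import defaultdict

random.seed(0)
N = 8                       # number of symmetric slots (hypothetical toy task)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# One shared Q-table keyed by the local feature each action observes,
# instead of a separate entry for every joint state-action pair.
Q = defaultdict(float)

def local_feature(state, action):
    # A symmetric action only looks at its own slot (the shared "state part").
    return state[action]

def select_action(state):
    # Epsilon-greedy over the shared values of each action's local feature.
    if random.random() < EPS:
        return random.randrange(N)
    return max(range(N), key=lambda a: Q[local_feature(state, a)])

def step(state, action):
    # Flipping a 0-slot to 1 is rewarded; flipping a 1-slot back is penalized.
    reward = 1.0 if state[action] == 0 else -1.0
    nxt = list(state)
    nxt[action] ^= 1
    return tuple(nxt), reward

for _ in range(200):        # training episodes
    s = tuple(random.randrange(2) for _ in range(N))
    for _ in range(50):
        a = select_action(s)
        s2, r = step(s, a)
        phi = local_feature(s, a)
        if all(s2):                      # terminal: all slots set
            target = r
        else:
            target = r + GAMMA * max(Q[local_feature(s2, b)] for b in range(N))
        # Standard Q-learning update, applied to the shared table.
        Q[phi] += ALPHA * (target - Q[phi])
        s = s2
        if all(s):
            break
```

After training, the shared table holds just two entries (local feature 0 or 1) rather than up to 2**N * N state-action values, and it correctly prefers flipping unset slots, illustrating how symmetry shrinks the problem without changing the underlying learning rule.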