Reinforcement learning (RL) can be applied to a wide class of problems because it requires no information other than perceived states and rewards to find good action policies. However, it takes a large number of trials to acquire the optimal policy. To make RL faster, the use of subgoals is proposed. Since errors and ambiguity are inevitable in subgoal information provided by human designers, a mechanism is also proposed that controls the use of subgoals. The method is applied to example tasks, and the results show that subgoals are very effective in accelerating RL and that the proposed control mechanism successfully suppresses the potentially serious degradation of RL performance caused by errors and ambiguity in the subgoal information.
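To make the idea concrete, the following is a minimal sketch of subgoal-based reward shaping in tabular Q-learning on a simple chain task. The environment, the one-time-per-episode subgoal bonus, and all parameter values are illustrative assumptions, not the paper's actual method; the proposed control mechanism (which would adjust the subgoal's influence when the subgoal information proves unreliable) is omitted here for brevity.

```python
import random

def q_learning(n_states=10, episodes=300, subgoal=None, subgoal_bonus=0.5,
               alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: start in state 0, terminal
    reward at the rightmost state. Actions: 0 = left, 1 = right.
    Visiting the (hypothetical) subgoal state yields a one-time
    shaping bonus per episode."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    steps_per_episode = []
    for _ in range(episodes):
        s, steps, hit_subgoal = 0, 0, False
        while s != n_states - 1 and steps < 10 * n_states:
            if rng.random() < epsilon:                  # explore
                a = rng.randrange(2)
            else:                                       # greedy, random tie-break
                best = max(Q[s])
                a = rng.choice([i for i in (0, 1) if Q[s][i] == best])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-time shaping bonus for reaching the designer-given subgoal
            if subgoal is not None and s2 == subgoal and not hit_subgoal:
                r += subgoal_bonus
                hit_subgoal = True
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, steps = s2, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode
```

Comparing `q_learning()` against `q_learning(subgoal=5)` illustrates the shaping effect: the bonus draws exploration toward the subgoal and hence toward the goal. A control mechanism of the kind the abstract describes would additionally scale `subgoal_bonus` down when the subgoal appears misleading, so that erroneous subgoal information cannot dominate the true reward signal.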