This paper proposes a new method for automatically discovering multiple subgoals to accelerate reinforcement learning. Several methods for subgoal discovery have been proposed. Some use state visitation frequencies in trajectories that reach the goal state: a state that is visited very frequently is regarded as a subgoal. Because such methods require the goal state to be reached many times in order to collect trajectories, they take a long time to discover subgoals. In addition, they cannot discover potential subgoals, that is, states that would become appropriate subgoals if the goal state changed. Other methods identify subgoals by partitioning local state transition graphs, but they require a large amount of computation. We propose a new method that overcomes these drawbacks. The new method also uses state visitation frequencies, but it collects trajectories that pass through particular non-goal states selected at random; trajectories are collected for each such state. Most trajectories reach a particular state more easily than the goal state. Therefore, we expect to discover subgoals quickly and to discover multiple subgoals at once.
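The frequency-counting step can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the function name, the visitation-ratio threshold, and the toy two-room gridworld states are all assumptions introduced here:

```python
from collections import Counter

def candidate_subgoals(trajectories, ratio=0.8):
    """Return states visited in at least `ratio` of the trajectories.

    Each trajectory is a list of states ending at a target state
    (here a randomly selected non-goal state, as in the proposed
    method). The target itself is excluded from the counts.
    """
    counts = Counter()
    for traj in trajectories:
        # Count each state at most once per trajectory.
        counts.update(set(traj[:-1]))
    threshold = ratio * len(trajectories)
    return {s for s, c in counts.items() if c >= threshold}

# Toy example: three trajectories in a hypothetical two-room
# gridworld, each passing through a "doorway" state D on the way
# to the randomly selected target state T.
trajs = [
    ["A", "B", "D", "E", "T"],
    ["C", "B", "D", "F", "T"],
    ["A", "D", "G", "T"],
]
print(candidate_subgoals(trajs))  # → {'D'}
```

Only the doorway state D appears in all three trajectories, so it is the sole candidate subgoal at the 0.8 threshold; ordinary corridor states fall below it.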