Gait recognition is a non-contact person identification method that uses cameras installed at a distance. However, gait images contain person-agnostic elements (covariates) such as clothing, and removing these covariates is essential for high identification performance. Disentangled representation learning, which separates gait-dependent information such as posture from covariates in an unsupervised manner, has attracted attention as a way to remove covariates. However, because the amount of available gait data is far smaller than in other computer vision tasks such as image recognition, the separation performance of existing methods is insufficient. In this study, we propose a gait recognition method that improves separation performance by augmenting the training data through adversarial generation based on identity features separated by disentangled representation learning. The proposed method first separates gait-dependent features (pose features) and appearance-related covariate features (style features) from gait videos via disentangled representation learning. It then generates synthesized gait images by exchanging pose features between gait images of the same person captured under different walking conditions, and adds these images to the training data. Experiments show that our method improves separation performance and generates high-quality gait images that are effective for data augmentation.
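
The pose/style exchange described above can be sketched as follows. This is a toy illustration only: the `encode`/`decode` functions and the half-split of the feature vector are hypothetical stand-ins, not the paper's actual encoder, decoder, or generative networks.

```python
# Toy sketch of data augmentation by swapping disentangled features.
# In the real method, encoders/decoders are learned neural networks and
# the inputs are gait video frames; here, a "frame" is a flat feature
# vector whose first half plays the role of pose, second half of style.

def encode(frame):
    """Split a frame's feature vector into (pose, style) halves."""
    mid = len(frame) // 2
    return frame[:mid], frame[mid:]

def decode(pose, style):
    """Reassemble a synthetic frame from pose and style features."""
    return pose + style  # list concatenation

def augment(frame_a, frame_b):
    """Exchange pose features between two frames of the same person
    recorded under different walking conditions, yielding two
    synthesized samples to add to the training data."""
    pose_a, style_a = encode(frame_a)
    pose_b, style_b = encode(frame_b)
    # Pose from one condition combined with style from the other.
    return decode(pose_a, style_b), decode(pose_b, style_a)

# Usage: two toy "frames" of the same subject under two conditions.
fa = [1, 2, 3, 10, 20, 30]   # condition A (e.g. normal clothing)
fb = [4, 5, 6, 40, 50, 60]   # condition B (e.g. wearing a coat)
syn_ab, syn_ba = augment(fa, fb)
print(syn_ab)  # [1, 2, 3, 40, 50, 60]
print(syn_ba)  # [4, 5, 6, 10, 20, 30]
```

The point of the swap is that each synthesized sample pairs one condition's gait-dependent (pose) content with the other condition's covariate (style) content, so the augmented set covers pose/style combinations absent from the original data.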