This paper presents a spatio-temporal 3D gait database and a view-independent method for identifying people from their gait. When a target's walking direction differs from the one recorded in the database, the correct classification rate drops because of the change in appearance. To address this problem, several methods based on a view transformation model, which converts walking images from one direction into virtual images from different viewpoints, have been proposed. However, the converted images may not coincide with the real ones, because the target is not included in the training dataset used to learn the transformation model. We therefore propose a view-independent person identification method that builds a database of virtual images synthesized directly from the target's 3D model. In the proposed method, we first construct a spatio-temporal 3D gait database using multiple cameras, consisting of sequential 3D models of multiple walking people. Virtual images from multiple arbitrary viewpoints are then synthesized from the 3D models, and affine moment invariants are derived from these virtual images as gait features. In the identification phase, a single camera captures images of a target walking in an arbitrary direction, and the same gait features are computed. Finally, the person is identified and his or her walking direction is estimated. Experiments on the spatio-temporal 3D gait database demonstrate the effectiveness of the proposed method.
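The abstract names affine moment invariants as the gait features. As a rough illustration only (the abstract does not specify which invariants or how many are used), the first two Flusser–Suk affine moment invariants of a binary walking silhouette can be computed from its central moments; translation invariance comes from centering on the centroid, and the normalization by powers of the zeroth moment gives invariance to affine transformations:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary silhouette image."""
    ys, xs = np.nonzero(img)
    xc, yc = xs.mean(), ys.mean()  # silhouette centroid
    return float(((xs - xc) ** p * (ys - yc) ** q).sum())

def affine_moment_invariants(img):
    """First two Flusser-Suk affine moment invariants of a binary image.

    I1 = (mu20*mu02 - mu11^2) / mu00^4
    I2 = (mu30^2*mu03^2 - 6*mu30*mu21*mu12*mu03 + 4*mu30*mu12^3
          + 4*mu21^3*mu03 - 3*mu21^2*mu12^2) / mu00^10
    """
    mu = {(p, q): central_moment(img, p, q)
          for p in range(4) for q in range(4) if p + q <= 3}
    mu00 = mu[0, 0]  # silhouette area (pixel count)
    i1 = (mu[2, 0] * mu[0, 2] - mu[1, 1] ** 2) / mu00 ** 4
    i2 = (mu[3, 0] ** 2 * mu[0, 3] ** 2
          - 6 * mu[3, 0] * mu[2, 1] * mu[1, 2] * mu[0, 3]
          + 4 * mu[3, 0] * mu[1, 2] ** 3
          + 4 * mu[2, 1] ** 3 * mu[0, 3]
          - 3 * mu[2, 1] ** 2 * mu[1, 2] ** 2) / mu00 ** 10
    return i1, i2
```

In the proposed pipeline, such invariants would be evaluated on each synthesized virtual view and on the real single-camera images, so that matching can be done in feature space regardless of viewpoint.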