This paper presents a person identification technique that uses information from a person's shadow and is robust to appearance changes caused by variations in clothing and carried objects. The technique relies on invisible lights and the shadows they produce, and therefore offers the advantage of unobtrusive sensing. The shadows cast on the ground by multiple lights can be regarded as silhouettes captured by multiple virtual cameras placed at the light positions. Thus a single camera, e.g., mounted on the ceiling, can obtain multiple silhouettes, equivalent to those of a multi-camera system. If a person's appearance differs from the training samples in the database, e.g., because the person wears different clothes or carries a different bag, identification performance degrades. To address this problem, we introduce a new shadow-based identification technique that is robust to such appearance changes. First, we divide each shadow region into several parts and estimate the discrimination capability of each part from the gait features of the gallery and probe datasets. Then, according to the estimated capability, we adaptively control the priorities of these parts in the person identification method. We constructed a new shadow database covering a variety of clothes and bags, and carried out experiments that verify the effectiveness of the proposed technique.
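
To make the part-based weighting step concrete, the following is a minimal sketch, not the authors' implementation: it assumes each shadow silhouette is split into horizontal bands, that each band yields a gait feature vector, and that a simple heuristic (down-weighting parts whose probe feature lies far from every gallery sample, as such parts likely reflect an appearance change) stands in for the paper's discrimination-capability estimate. All function names and the weighting rule are illustrative assumptions.

```python
import numpy as np

def split_into_parts(silhouette, n_parts=4):
    """Divide a binary shadow silhouette into horizontal bands
    (a hypothetical partition scheme)."""
    return np.array_split(silhouette, n_parts, axis=0)

def part_weights(gallery_feats, probe_feat):
    """Assumed heuristic for per-part discrimination capability:
    a part whose probe feature stays close to the gallery distribution
    gets higher priority; a part that matches nothing (e.g., occluded
    by a carried bag) gets a low weight.

    gallery_feats: (n_subjects, n_parts, d); probe_feat: (n_parts, d)."""
    # Per-part distance from the probe to every gallery sample.
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=2)  # (n_subjects, n_parts)
    nearest = dists.min(axis=0)                                 # (n_parts,)
    w = 1.0 / (1e-6 + nearest)       # small mismatch -> large weight
    return w / w.sum()

def identify(gallery_feats, probe_feat):
    """Weighted nearest-neighbour identification over shadow parts."""
    w = part_weights(gallery_feats, probe_feat)
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=2)  # (n_subjects, n_parts)
    scores = (dists * w).sum(axis=1)    # low score = good match
    return int(scores.argmin())
```

In this toy setup, a probe whose bag-occluded part differs strongly from its gallery entry is still identified correctly, because that part receives a near-zero weight while the unchanged parts dominate the match.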