This paper describes vision-based motion control of a human figure. Our goal is the seamless mapping of human motion in the real world into virtual environments. To make computing systems better suited to their users, we have developed a vision-based method for human motion analysis and synthesis. The analysis is implemented by blob tracking, and the synthesis focuses on generating realistic motion from a limited number of blobs. The synthesis is realized using physical constraints together with additional constraints; to make the synthesized motion more realistic, we estimated effective additional constraints by analyzing real motion capture data. As a PUI application, we have applied these methods to real-time 3D interaction, such as direct-manipulation 3D interfaces.
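As a rough illustration of the blob-based analysis mentioned above, the sketch below labels connected blobs in a binary foreground mask and returns their centroids, which could then drive the figure's end effectors. This is a minimal stand-in under simple assumptions (4-connectivity, a pure-Python mask), not the paper's actual tracker.

```python
from collections import deque

def label_blobs(mask):
    """Label 4-connected blobs in a binary mask; return their centroids.

    mask: list of rows of 0/1 values. Returns a list of (row, col)
    centroids, one per blob, in discovery order.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill to collect this blob's pixels.
                queue = deque([(r, c)])
                seen[r][c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

# Two separate blobs: an L-shape at top-left, a bar at the right edge.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
print(label_blobs(mask))  # two centroids, one per blob
```

In practice a tracker would also associate each centroid with the blob found in the previous frame (e.g. by nearest-neighbor matching) to obtain motion trajectories over time.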