A sign language recognition system must exploit both global features, such as hand movement and location, and local features, such as hand shape and orientation. In this paper, we present a local feature recognizer suitable for a sign language recognition system. Our basic approach is to represent hand images extracted from sign-language video as symbols, each corresponding to a cluster produced by a clustering technique. The clusters are created from a training set of extracted hand images so that images of similar appearance fall into the same cluster in an eigenspace. The experimental results indicate that our system can recognize sign language words even in two-handed and hand-to-hand contact cases.
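The pipeline the abstract describes — projecting hand images into an eigenspace and quantizing them into cluster symbols — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of SVD-based PCA, and the use of plain k-means as the clustering technique are all assumptions for the sake of the example.

```python
import numpy as np

def build_eigenspace(images, n_components=8):
    """Build a PCA eigenspace via SVD.

    `images` is an (n_samples, n_pixels) array of flattened hand images.
    Returns the mean image and the top principal axes.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(images, mean, basis):
    """Project (flattened) images onto the eigenspace."""
    return (images - mean) @ basis.T

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means (a stand-in for whatever clustering the paper uses)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def hand_symbol(image, mean, basis, centers):
    """Map one hand image to its cluster index (its 'symbol')."""
    coords = project(image[None, :], mean, basis)
    return int(np.argmin(((centers - coords) ** 2).sum(-1)))
```

At recognition time, each extracted hand image would be reduced to a symbol via `hand_symbol`, and the resulting symbol sequence fed to the word-level recognizer.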
Number of pages: 5
Journal: Proceedings - International Conference on Pattern Recognition
Publication status: Published - Dec 1 2000
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition