A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects candidate words using the detected global features and then narrows the choices down to one using the detected local features. In this paper, we describe a local-feature recognizer suited to such a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters obtained by a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance fall into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in cases involving two hands or hand-to-hand contact.
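The abstract's core idea, representing hand images as cluster symbols in an eigenspace, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the image dimensions, number of eigenvectors, cluster count, and the plain k-means routine are all assumptions, and random vectors stand in for real extracted hand images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 flattened 16x16 "hand images".
images = rng.normal(size=(200, 256))

# --- Eigenspace projection (PCA via SVD) of the training images ---
mean = images.mean(axis=0)
centered = images - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                # assumed eigenspace dimensionality
eigvecs = vt[:k]                      # top-k eigenvectors, shape (k, 256)
coords = centered @ eigvecs.T         # eigenspace coordinates, shape (200, k)

# --- Simple k-means clustering in the eigenspace; each cluster index
# then serves as the "symbol" for one hand-appearance class ---
def kmeans(x, n_clusters, n_iter=50):
    centers = x[rng.choice(len(x), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for c in range(n_clusters):
            pts = x[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return labels, centers

symbols, centers = kmeans(coords, n_clusters=8)

# A new hand image is converted to a symbol by projecting it into the
# eigenspace and taking the index of the nearest cluster center.
new_image = rng.normal(size=256)
new_coords = (new_image - mean) @ eigvecs.T
symbol = int(np.argmin(((centers - new_coords) ** 2).sum(-1)))
```

In this sketch the symbol sequence produced for a sign-language image sequence would then be matched against per-word symbol models; that matching stage is outside the scope of the example.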
Journal: Kyokai Joho Imeji Zasshi / Journal of the Institute of Image Information and Television Engineers
Number of pages: 10
Publication status: Published - 2000
All Science Journal Classification (ASJC) codes
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering