Static Hand Gesture Recognition based on fusion of Moments
Subhamoy Chatterjee, Dipak Kumar Ghosh and Samit Ari
Department of Electronics and Communication Engineering
National Institute of Technology, Rourkela
Rourkela- 769008, Odisha
Abstract. This work presents a vision-based static hand gesture recognition algorithm consisting of three stages: pre-processing, feature extraction and classification. The pre-processing stage comprises three sub-stages: segmentation, which separates the hand region from the background using a YCbCr skin-colour based segmentation process; rotation, which rotates the segmented gesture to make the algorithm rotation-invariant; and morphological filtering, which removes background and object noise. Non-orthogonal moments (geometric moments) and orthogonal moments (Tchebichef and Krawtchouk moments) are used as features. To improve classification performance, two feature fusion strategies are proposed in this work: serial feature fusion and parallel feature fusion. A feed-forward multi-layer perceptron (MLP) based artificial neural network classifier is proposed. A user-independent experiment is conducted on 1500 gestures of 10 classes performed by 10 different users.
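The YCbCr skin-colour segmentation sub-stage mentioned above can be illustrated with a minimal sketch. The conversion follows the standard ITU-R BT.601 formulas; the Cb/Cr threshold ranges used here are illustrative values commonly quoted in the skin-detection literature, not necessarily the exact parameters used in this work.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3, uint8) to Y, Cb, Cr planes
    using the ITU-R BT.601 conversion."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of skin-coloured pixels by thresholding the
    chrominance planes; the default ranges are illustrative
    literature values, assumed here for the sketch."""
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Thresholding only the chrominance planes (Cb, Cr) and ignoring luminance (Y) makes the mask relatively insensitive to lighting changes, which is the usual motivation for segmenting skin in the YCbCr space.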
Keywords: American Sign Language digits, Geometric moment, Tchebichef moment, Krawtchouk moment, Serial feature fusion, Parallel feature fusion, Artificial Neural Network.
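The two fusion strategies named in the abstract can be sketched as follows. Serial fusion is taken here as simple concatenation of the two moment-based feature vectors, and parallel fusion as the common complex-vector formulation (one vector as the real part, the other as the imaginary part, with zero-padding of the shorter one); these are standard definitions assumed for illustration, since the abstract does not spell them out.

```python
import numpy as np

def serial_fusion(f1, f2):
    """Serial fusion: stack the two feature vectors end to end,
    giving a fused vector of length len(f1) + len(f2)."""
    return np.concatenate([np.asarray(f1, float), np.asarray(f2, float)])

def parallel_fusion(f1, f2):
    """Parallel fusion: combine the two vectors as the real and
    imaginary parts of one complex vector (zero-padding the
    shorter), then take the magnitude as the fused real-valued
    feature of length max(len(f1), len(f2))."""
    f1 = np.asarray(f1, float)
    f2 = np.asarray(f2, float)
    n = max(len(f1), len(f2))
    a = np.pad(f1, (0, n - len(f1)))
    b = np.pad(f2, (0, n - len(f2)))
    return np.abs(a + 1j * b)
```

Serial fusion grows the feature dimensionality, while parallel fusion keeps it bounded by the longer of the two inputs, which is one practical trade-off between the two strategies.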
Static hand gesture recognition algorithms have mostly been divided into vision-based techniques and glove-based techniques [2-3]. Vision-based techniques are generally preferred because they are less complex than glove-based techniques. Vision-based gesture recognition systems are of two types: contour-based and shape-based. In a recent work, the authors reported a robust static hand gesture recognition system using...