American Sign Language recognition using Convolution Neural Network for Raspberry Pi
EasyChair Preprint 2957, version 1
12 pages • Date: March 14, 2020

Abstract

Hand gestures carry a vast amount of information that can be used for basic communication by people with disabilities, as well as to augment the communication of others. Because the information in hand gestures relies on movement sequences, identifying hand gestures accurately in real time is challenging. In the domain of human-computer interaction, hand gesture recognition models have been developed for mouse pointer movement, playing games with specific actions, etc. Various techniques such as HOG transforms, SIFT, BRIEF, and ORB have been used to identify the region of interest, and classifiers such as Support Vector Machines (SVMs) and Hidden Markov Models (HMMs) are then used to label it. These methods demand heavy computational resources. This paper presents a novel method for recognizing American Sign Language (ASL) using image pre-processing methods and a Convolutional Neural Network (CNN) for classification, implemented on a Raspberry Pi 3. Images captured from the Raspberry Pi camera module are pre-processed for better clarity and region-of-interest isolation so that a better set of features is extracted. These features are then fed into the CNN for classification. Executing the model on the Raspberry Pi 3 has produced satisfactory results: both the classification output and the time taken by the system have been acceptable to end users.

Keyphrases: American Sign Language, Convolution Neural Network, Hand Gesture Recognition, Raspberry Pi, image processing
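The abstract describes a pipeline of camera capture, pre-processing with region-of-interest isolation, and CNN classification. The paper itself is not reproduced here, so the following is only a minimal NumPy sketch of that pipeline shape: the grayscale conversion, the brightness-based ROI mask, the single convolution layer, and the 26-way (A-Z) output are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def preprocess(frame):
    """Convert an RGB frame to normalized grayscale and crudely isolate a
    region of interest by keeping brighter-than-average pixels.
    (Illustrative stand-in for the paper's pre-processing stage.)"""
    gray = frame.mean(axis=2) / 255.0   # naive grayscale, values in [0, 1]
    mask = gray > gray.mean()           # hypothetical ROI mask
    return gray * mask

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify(img, kernel, weights):
    """One conv + ReLU + global average pooling + softmax over 26 letters."""
    feat = np.maximum(conv2d(img, kernel), 0)  # ReLU feature map
    pooled = feat.mean()                       # global average pooling
    logits = pooled * weights                  # hypothetical dense layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # softmax probabilities

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(32, 32, 3)).astype(float)  # stand-in camera frame
kernel = rng.standard_normal((3, 3))           # untrained weights, for shape only
weights = rng.standard_normal(26)

probs = classify(preprocess(frame), kernel, weights)
print(probs.shape)  # (26,)
```

In practice the paper's CNN would be trained on labeled ASL images and run with an optimized inference library; this sketch only shows how the pre-processing and classification stages connect.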