Sign Language Recognition

Hand Gesture Recognition (HGR) Overview

  • HGR enhances human-computer interaction (HCI).

  • Importance of accurate character recognition for effective communication.

Deep Learning Approach

  • Utilization of Convolutional Neural Networks (CNNs): Modified AlexNet and VGG16.

  • Focus on recognizing American Sign Language (ASL) characters (both alphabet letters and numerals).

  • Features are extracted with pre-trained CNN models and then classified by a Support Vector Machine (SVM).
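The CNN-features-then-SVM design above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the randomly generated 4096-dimensional vectors stand in for features that would actually come from a pre-trained AlexNet/VGG16 layer, and the class count and cluster means are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for CNN features: in the described pipeline each 4096-dim vector
# would come from a pre-trained (modified AlexNet / VGG16) layer; here we
# synthesize well-separated clusters for 3 hypothetical classes.
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 20, 4096, 3
features = np.vstack([
    rng.normal(loc=c * 5.0, scale=1.0, size=(n_per_class, n_features))
    for c in range(n_classes)
])
labels = np.repeat(np.arange(n_classes), n_per_class)

# A linear SVM classifies the extracted feature vectors.
clf = SVC(kernel="linear")
clf.fit(features, labels)
print(clf.score(features, labels))  # training accuracy on the toy clusters
```

In practice the feature extractor is frozen and only the SVM is trained, which keeps the classification stage cheap.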

Performance Evaluation

  • Achieved recognition accuracy: 99.82%, surpassing many state-of-the-art methods.

  • Two validation methods used:

    • Leave-one-subject-out.

    • Random 70-30 train-test split.
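The two validation schemes can be sketched with scikit-learn utilities. The data here is synthetic and the subject count (5) is taken from the dataset description; everything else (sample counts, labels) is illustrative only.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

# Toy data: 10 samples from each of 5 subjects (labels are arbitrary).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
y = rng.integers(0, 2, size=50)
subjects = np.repeat(np.arange(5), 10)

# Leave-one-subject-out: each fold holds out every sample of one subject,
# so accuracy reflects generalization to unseen signers.
logo = LeaveOneGroupOut()
folds = list(logo.split(X, y, groups=subjects))
print(len(folds))  # one fold per subject

# Random 70-30 split: subjects can appear in both train and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(len(X_tr), len(X_te))
```

Leave-one-subject-out is the stricter test: a random split lets images of the same signer land on both sides, which usually inflates accuracy.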

Gesture Recognition Challenges

  • High inter-class similarity poses challenges for recognition accuracy.

  • Some characters often misclassified due to similar gestures.

Experimental Analysis

  • Dataset: 36 ASL characters with a total of 2520 images from 5 subjects.

  • Resizing of images for compatibility with CNN inputs.

  • Data augmentation applied to expand training dataset.
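The resizing and augmentation steps above might look like the following sketch. The 224x224 target is the standard VGG16 input size; the specific augmentations (flip, rotation) are assumptions for illustration — note that horizontal flips change handedness, which may or may not be appropriate for ASL gestures.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize to a CNN-compatible input size
    (a minimal stand-in for a library resizer)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def augment(img):
    """Hypothetical augmentations: the original, a horizontal flip,
    and a 90-degree rotation expand one image into three."""
    return [img, np.fliplr(img), np.rot90(img)]

img = np.arange(100 * 80).reshape(100, 80)   # toy grayscale image
resized = resize_nn(img, 224, 224)
print(resized.shape)
print(len(augment(resized)))
```

Augmentation like this multiplies the effective size of a small dataset (2520 images here) before training.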

System Architecture

  • SVM used for classification; its memory footprint stays small because the decision boundary depends only on the support vectors, not the full training set.

  • Highlights the need for recognition strategies robust to visually similar gestures.
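The support-vector point above can be demonstrated directly: after fitting, scikit-learn's `SVC` retains only the support vectors. The 2D Gaussian data below is synthetic and chosen only to make the effect visible.

```python
import numpy as np
from sklearn.svm import SVC

# Two toy 2D classes; after training, only the samples near the margin
# (the support vectors) are needed to evaluate the decision function.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)

clf = SVC(kernel="linear").fit(X, y)
print(clf.support_vectors_.shape[0], "of", len(X), "samples retained")
```

This is why an SVM on top of CNN features is attractive for deployment: the stored model is a small fraction of the training data.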

Future Directions

  • Explore attention-based CNN architectures to improve differentiation among similar gestures.