The Sign Language Recognition System has been designed to capture video input, process it to detect hand gestures, and translate these gestures into readable text. The project consists of several key components and steps; short illustrative sketches of each follow this overview.

Video Processing: Using OpenCV, the system captures frames from the video input. MediaPipe processes these frames to detect and track hand landmarks in real time. OpenCV's capabilities allow for efficient frame extraction and basic image-processing tasks such as resizing and normalization.

Hand Detection and Tracking: MediaPipe's pre-trained models identify and track hand movements within the video frames. Accurate detection and tracking of hand movements is critical for the subsequent recognition of sign language gestures.

Sign Language Recognition: The core of the system is a deep learning model, trained using TensorFlow and Keras on a dataset of sign language gestures. The model learns to classify the detected hand movements into corresponding sign language characters or words. Convolutional Neural Networks (CNNs) are typically used for this task due to their effectiveness in image recognition.

Text Display: Once the system recognizes the signs, it converts them into text and displays the output. This can be done through console output or a graphical user interface (GUI) built with Tkinter. The GUI provides a user-friendly experience, allowing users to see the translated text in real time.
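As a concrete illustration of the video-processing and hand-tracking steps, the following is a minimal sketch of a capture-and-track loop built on OpenCV and MediaPipe's Hands solution. The camera index, window name, and confidence thresholds are assumptions for illustration, not values taken from the project.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed: default webcam at index 0
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Overlay the detected hand landmarks on the frame.
            for landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, landmarks,
                                          mp_hands.HAND_CONNECTIONS)
        cv2.imshow('Hand Tracking', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```

The detected landmark coordinates from each frame are what the downstream recognition model would consume.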
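The text does not specify the network architecture beyond a CNN trained with TensorFlow and Keras, so the following is only a sketch of what such a classifier might look like. The 64x64 grayscale input shape and the 26-class (A-Z) output layer are assumptions for illustration; the project's actual dataset and layer configuration are not given.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed: 64x64 grayscale crops of the hand region, 26 letter classes.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(26, activation='softmax'),  # one class per letter A-Z
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=10)  # dataset not specified
```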
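For the text-display step, a minimal Tkinter sketch is shown below. Here predict_latest_text is a hypothetical placeholder for whatever interface the recognizer actually exposes; the refresh interval and fonts are likewise assumptions.

```python
import tkinter as tk

def predict_latest_text():
    # Hypothetical placeholder for the model's decoded output.
    return "HELLO"

root = tk.Tk()
root.title('Sign Language Recognition')
label = tk.Label(root, text='', font=('Helvetica', 24))
label.pack(padx=20, pady=20)

def refresh():
    # Poll the recognizer and update the displayed text ~10 times per second.
    label.config(text=predict_latest_text())
    root.after(100, refresh)

refresh()
root.mainloop()
```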
Keywords: Sign Language Recognition, Video Processing, OpenCV, MediaPipe, Hand Detection, Hand Tracking, Convolutional Neural Networks (CNNs), Graphical User Interface (GUI).