Kaushik066 committed
Commit e35f173 · 1 Parent(s): 8435477

Update about.md

Files changed (1)
  1. about.md +2 -2
about.md CHANGED
@@ -7,11 +7,11 @@ this technology ensures seamless interaction. Additionally, the product includes
  Our solution uses advanced AI technology to process videos and identify sign language gestures. The process begins by extracting pose coordinates, which include the positions of the hands, face, and body edges, from each frame of the video.
  These coordinates act as a blueprint of the movements and gestures performed by the person in the video. By analyzing these hand movements in detail, the AI model identifies the gestures being made and matches them to the most likely English word associated with that specific sign.
  For instance, the image provided illustrates a person performing the gesture for the word "Student," demonstrating the system's ability to interpret and translate sign language gestures into meaningful English words.
- ![ISL representation of word Student](hand_landmark.png)
+ ![ISL representation of word Student](static/hand_gesture_recognition.png)
 
  In addition to recognizing gestures, our solution also visualizes them through animated motion videos. During the AI model's training phase, face and hand coordinates are collected from the videos to create dynamic animations that represent all the sign language gestures the AI has learned to recognize.
  These animations serve as a visual guide, making it easier for users to understand and learn sign language gestures. For example, the motion video below demonstrates how the word "Student" is represented in sign language, showcasing both the accuracy and clarity of the system's animated outputs.
- ![Animation of word Student in ISL](sign_animation.png)
+ ![Animation of word Student in ISL](static/sign_animation.png)
 
  # Who are our target audience?
  - Hearing-Impaired Individuals: To empower them with a tool that facilitates communication with those unfamiliar with sign language.
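
The per-frame coordinate extraction described in the changed file could look roughly like the sketch below. This is a minimal illustration only: the commit does not state which libraries the project uses, so OpenCV for video I/O, MediaPipe Holistic for hand/face/body landmarks, and the function name `extract_pose_coordinates` are all assumptions, not the project's confirmed implementation.

```python
# Sketch (assumed stack): read a video frame by frame and collect hand, face,
# and body landmark coordinates into one vector per frame.
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic


def extract_pose_coordinates(video_path: str) -> np.ndarray:
    """Return one flat (x, y, z) keypoint vector per frame of the video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            keypoints = []
            for landmarks, count in (
                (results.pose_landmarks, 33),        # body
                (results.face_landmarks, 468),       # face mesh
                (results.left_hand_landmarks, 21),   # left hand
                (results.right_hand_landmarks, 21),  # right hand
            ):
                if landmarks:
                    keypoints.extend(
                        c for lm in landmarks.landmark for c in (lm.x, lm.y, lm.z)
                    )
                else:
                    keypoints.extend([0.0] * count * 3)  # pad undetected parts
            frames.append(keypoints)
    cap.release()
    return np.array(frames)  # shape: (num_frames, num_keypoints * 3)
```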
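
For the animated motion videos mentioned in the second paragraph, one simple way to turn the collected coordinates back into an animation is to redraw them frame by frame and write a video. Again a hedged sketch: the drawing style, output file name, and array layout (matching the extraction sketch above) are assumptions.

```python
# Sketch: render stored normalized (x, y) keypoints as dots on a blank canvas
# and save the sequence as a motion video.
import cv2
import numpy as np


def render_sign_animation(keypoints: np.ndarray, out_path: str = "sign_animation.mp4",
                          size: int = 512, fps: int = 25) -> None:
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (size, size))
    for frame_vec in keypoints:
        canvas = np.full((size, size, 3), 255, dtype=np.uint8)  # white background
        points = frame_vec.reshape(-1, 3)  # rows of (x, y, z) in [0, 1]
        for x, y, _ in points:
            if x > 0 or y > 0:  # skip padded (undetected) points
                cv2.circle(canvas, (int(x * size), int(y * size)), 2, (0, 0, 0), -1)
        writer.write(canvas)
    writer.release()
```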