Commit

added readme and requirements
prithvimk committed Feb 6, 2022
1 parent c284ea0 commit 6e57e88
Showing 3 changed files with 39 additions and 1 deletion.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1 +1,2 @@
-venv
+venv
+gestures.csv
32 changes: 32 additions & 0 deletions README.md
@@ -0,0 +1,32 @@
# Sign Language Detection

**A Work in Progress**

This project aims to create a machine learning model that can translate Indian Sign Language (ISL) to English text and act as a simple medium of communication for people unfamiliar with sign language.

Hand recognition is done using the [MediaPipe Hands solution](https://google.github.io/mediapipe/solutions/hands.html) in Python.
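
For context, a minimal sketch of how MediaPipe Hands can be combined with OpenCV to detect and draw hand landmarks from a webcam feed is shown below. This is only an illustration and not necessarily how [hand_recognition.py](hand_recognition.py) is written; the parameter values (`max_num_hands`, `min_detection_confidence`) and the window name are assumptions.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
# max_num_hands / min_detection_confidence values are illustrative assumptions
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input, while OpenCV captures frames in BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Hand landmarks", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```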

Tutorials I referred to:
1. [Real-time Hand Gesture Recognition using TensorFlow & OpenCV](https://techvidvan.com/tutorials/hand-gesture-recognition-tensorflow-opencv/)
2. [Python: Hand landmark estimation with MediaPipe](https://techtutorialsx.com/2021/04/10/python-hand-landmark-estimation/)

Currently, only dataset creation has been implemented ([save_gestures.py](save_gestures.py)).

**Instructions to create the dataset**

1. Create a virtual environment using ```virtualenv``` and activate it.
2. Run: ```pip install -r requirements.txt```
3. To just play around with the hand detection, run [hand_recognition.py](hand_recognition.py).
4. To start creating the dataset, run [save_gestures.py](save_gestures.py).
5. Press 'C' on your keyboard to start capturing the gesture.
6. Enter the name of the gesture in the terminal.
7. Raise your hand in front of the camera while making the gesture; the script will automatically start capturing the pixel coordinates of the detected landmarks (see the sketch after this list).
8. Once the number of datapoints recorded equals ```TOTAL_DATAPOINTS```, the script will stop capturing.
9. Press 'C' to start recording a new gesture or press 'Q' to terminate the program.
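
The capture step (7–8 above) boils down to scaling MediaPipe's normalized landmark coordinates to pixel values and appending them to a CSV. The sketch below illustrates the idea only; it is not the actual [save_gestures.py](save_gestures.py), and the `record_gesture` helper, the `TOTAL_DATAPOINTS` value, and the CSV row layout (gesture label followed by x, y pixel pairs) are assumptions.

```python
import csv

import cv2
import mediapipe as mp

TOTAL_DATAPOINTS = 100  # assumption: the real value is defined in save_gestures.py


def record_gesture(label, csv_path="gestures.csv"):
    """Append TOTAL_DATAPOINTS rows of landmark pixel coordinates for one gesture."""
    mp_hands = mp.solutions.hands
    cap = cv2.VideoCapture(0)
    recorded = 0
    with mp_hands.Hands(max_num_hands=1) as hands, open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        while recorded < TOTAL_DATAPOINTS:
            ok, frame = cap.read()
            if not ok:
                break
            h, w, _ = frame.shape
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                # MediaPipe returns normalized [0, 1] coordinates; scale them to pixels
                row = [label]
                for lm in results.multi_hand_landmarks[0].landmark:
                    row += [int(lm.x * w), int(lm.y * h)]
                writer.writerow(row)
                recorded += 1
    cap.release()
```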

## **To-Do**
---
1. Study more about ISL and decide what changes need to be made.
2. Test out different machine learning models and architectures.
3. Work on deployment.
5 changes: 5 additions & 0 deletions requirements.txt
@@ -0,0 +1,5 @@
mediapipe==0.8.9.1
numpy==1.22.2
opencv_python==4.5.5.62
pandas==1.4.0
tensorflow==2.8.0
