From 6e57e8884bf4ad1a39c55a225dbaf363e20bc52b Mon Sep 17 00:00:00 2001
From: Prithvi Krishna
Date: Sun, 6 Feb 2022 21:50:52 +0530
Subject: [PATCH] added readme and requirements

---
 .gitignore       |  3 ++-
 README.md        | 32 ++++++++++++++++++++++++++++++++
 requirements.txt |  5 +++++
 3 files changed, 39 insertions(+), 1 deletion(-)
 create mode 100644 README.md
 create mode 100644 requirements.txt

diff --git a/.gitignore b/.gitignore
index f5e96db..c7de1d1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
-venv
\ No newline at end of file
+venv
+gestures.csv
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..79e12f8
--- /dev/null
+++ b/README.md
@@ -0,0 +1,32 @@
+# Sign Language Detection
+
+**A Work in Progress**
+
+This project aims to create a Machine Learning model that translates Indian Sign Language into English text, acting as a simple medium of communication for people unfamiliar with sign language.
+
+The hand recognition is done using the [MediaPipe Hands solution](https://google.github.io/mediapipe/solutions/hands.html) in Python.
+
+Tutorials I referred to:
+1. [Real-time Hand Gesture Recognition using TensorFlow & OpenCV](https://techvidvan.com/tutorials/hand-gesture-recognition-tensorflow-opencv/)
+2. [Python: Hand landmark estimation with MediaPipe](https://techtutorialsx.com/2021/04/10/python-hand-landmark-estimation/)
+
+Currently, only dataset creation has been implemented ([save_gestures.py](save_gestures.py)).
+
+**Instructions to create the dataset**
+
+1. Create a virtual environment using
+   ```virtualenv``` and activate it.
+2. Run ```pip install -r requirements.txt```.
+3. To try out just the hand detection, run [hand_recognition.py](hand_recognition.py).
+4. To start creating the dataset, run [save_gestures.py](save_gestures.py).
+5. Press 'C' on your keyboard to start capturing a gesture.
+6. Enter the name of the gesture in the terminal.
+7. Hold your hand in front of the camera while making the gesture; the script will automatically capture the pixel coordinates of the detected landmarks.
+8. Once the number of datapoints recorded equals ```TOTAL_DATAPOINTS```, the script will stop capturing.
+9. Press 'C' to start recording a new gesture, or press 'Q' to terminate the program.
+
+## **To-Do**
+---
+1. Study more about ISL and decide what changes need to be made.
+2. Test out different machine learning models and architectures.
+3. Work on deployment.
\ No newline at end of file
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..ddfe1d6
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,5 @@
+mediapipe==0.8.9.1
+numpy==1.22.2
+opencv_python==4.5.5.62
+pandas==1.4.0
+tensorflow==2.8.0
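
For reference, the following is a minimal sketch of the kind of hand-landmark capture the README describes for [hand_recognition.py](hand_recognition.py) and step 7 of the dataset instructions, using the MediaPipe Hands and OpenCV packages pinned in requirements.txt. The camera index, confidence thresholds, and the way coordinates are printed are illustrative assumptions, not the repository's actual implementation.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumption: default webcam

with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.7,   # illustrative thresholds
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        if results.multi_hand_landmarks:
            h, w, _ = frame.shape
            for hand_landmarks in results.multi_hand_landmarks:
                # Landmarks come back normalized to [0, 1]; scale them by the
                # frame size to get the pixel coordinates mentioned in step 7.
                coords = [(int(lm.x * w), int(lm.y * h))
                          for lm in hand_landmarks.landmark]
                print(coords)  # 21 (x, y) points per detected hand
                mp_drawing.draw_landmarks(frame, hand_landmarks,
                                          mp_hands.HAND_CONNECTIONS)

        cv2.imshow("Hand landmarks", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
```

In a dataset-creation script such as save_gestures.py, these per-frame coordinate lists would presumably be labelled with the gesture name entered in the terminal and appended to gestures.csv (the file newly ignored in .gitignore) until ```TOTAL_DATAPOINTS``` samples have been collected.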