The purpose of this project is to detect sign language gestures in still images and translate/transcribe them into a format the general public can understand, using AI/ML techniques. This tool is applicable to multiple domains, including the business and education industries. The motivation lies in the general lack of resources for sign language translation in today's world and the potential of AI to automate this task.
- Define and understand a business problem
- Find the necessary data to be used
- Perform the cleaning operations and transformations needed to make the data usable
- Produce models and analyze the results based on the project's needs (a minimal modeling sketch follows this list)
- Determine future steps to further improve accuracy
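As a rough illustration of the modeling step, the sketch below builds a small convolutional network in Keras of the kind suited to classifying static gestures. The 28x28 grayscale input shape and 24 output classes are assumptions (typical of the public Sign Language MNIST dataset), not values taken from this project's notebook.

```python
# Minimal sketch of the modeling step. The 28x28 grayscale input and
# 24 output classes are assumptions (typical of Sign Language MNIST),
# not values taken from this project's notebook.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(24, activation="softmax"),  # one class per static letter sign
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```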
- `pandas`
- `numpy`
- `matplotlib.pyplot`
- `seaborn`
- `sklearn.metrics`
- `sklearn.preprocessing`
- `keras.utils.np_utils`
- `keras.models`
- `keras.layers`
- `keras.preprocessing.image`
- `keras.callbacks`
- `tensorflow.keras.optimizers`
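In a notebook, these libraries would typically be pulled in with an import block along the following lines; the aliases and the specific classes imported from each module are common-convention guesses, not copied from the project's notebook.

```python
# Typical import block corresponding to the libraries listed above;
# aliases and the specific classes imported are common conventions
# and may differ from what the project's notebook actually uses.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam
```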
The main deliverables for this project are located in the Exploratory Data Analysis notebook. To access the data used for this project, open the data folder and then the zipped data folder inside it; all of the data provided for this project is located there.
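If you prefer to unpack the archive programmatically, a minimal sketch follows; the archive and CSV file names are placeholders, since the actual file names inside the data folder are not listed here.

```python
# Hypothetical example of unpacking and loading the zipped data.
# "data/data.zip" and "train.csv" are placeholder names, not the
# actual file names in this repository.
import zipfile

import pandas as pd

with zipfile.ZipFile("data/data.zip") as archive:
    archive.extractall("data/unzipped")

train = pd.read_csv("data/unzipped/train.csv")
print(train.shape)
```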