
Sign Language Detection

General Info

The project converts images of American Sign Language (ASL) hand gestures to text. It uses a Convolutional Neural Network (CNN) built with transfer learning; the pre-trained model used here is VGG16.
About VGG16
VGG16 refers to the VGG model, also called VGGNet: a convolutional neural network (CNN) with 16 weight layers, proposed by K. Simonyan and A. Zisserman of the University of Oxford. VGG16 achieves 92.7% top-5 test accuracy on ImageNet, a dataset of more than 14 million images across 1000 object classes, and was one of the top-performing models in the ILSVRC-2014 competition.
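
As a rough illustration of the transfer-learning setup, the sketch below freezes VGG16's convolutional base and trains a small classifier head on top. It is a minimal sketch in Keras, assuming a 26-class output and a 224 x 224 input; the head size, optimizer, and input resolution are assumptions, not necessarily the repository's exact configuration.

# Minimal transfer-learning sketch (assumed setup, not the repo's exact code):
# freeze VGG16's pre-trained convolutional base, train a small head on top.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # one class per ASL alphabet letter

# Load VGG16 pre-trained on ImageNet, without its fully connected top layers.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional filters fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])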

Demo

  1. Jupyter Notebook: prediction (screenshot)
  2. Streamlit: watch the video demo (sld-demo.webm)

Dataset

The dataset covers all 26 letters of the English alphabet, giving broad coverage of American Sign Language (ASL) gestures. The training set contains 12,875 images and the test set contains 4,268 images; each image is 310 x 310 pixels.
Dataset source: Dataset
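
As a sketch of how such a dataset might be fed to the model, the snippet below resizes the 310 x 310 images down to VGG16's conventional 224 x 224 input and one-hot encodes the 26 classes. The directory paths, batch size, and resize target are assumptions, not the repository's actual settings.

# Assumed data pipeline: resize dataset images to VGG16's 224x224 input
# and yield one-hot labels for the 26 letter classes.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import preprocess_input

datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_gen = datagen.flow_from_directory(
    "data/train",            # hypothetical path
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)
test_gen = datagen.flow_from_directory(
    "data/test",             # hypothetical path
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    shuffle=False,
)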

Technologies and Tools

Python (3.x)
NumPy
Pandas
Matplotlib
Pillow
TensorFlow
Keras
OpenCV
Streamlit

Streamlit Setup for localhost

pip install streamlit
cd app
streamlit run app.py
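
For reference, a minimal app.py along these lines might look as follows. The model filename, label list, and preprocessing here are assumptions, not necessarily what the repository's app does, and TensorFlow and Pillow are assumed to be installed alongside Streamlit.

# Minimal Streamlit sketch (assumed, not the repo's actual app.py):
# upload a hand-sign image, run it through the saved model, show the letter.
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg16 import preprocess_input

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # 26 ASL letters

st.title("Sign Language Detection")
model = load_model("model.h5")  # hypothetical saved-model file

uploaded = st.file_uploader("Upload a hand-sign image", type=["jpg", "png"])
if uploaded is not None:
    img = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(img, caption="Input image")
    x = preprocess_input(np.expand_dims(np.array(img, dtype="float32"), axis=0))
    pred = model.predict(x)
    st.write(f"Predicted letter: {LABELS[int(np.argmax(pred))]}")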

Conclusion

The model achieved a test accuracy of 96.37% and a test loss of 1.511.
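
These numbers correspond to a standard Keras evaluation over the test set, e.g. (assuming the model and test_gen from the sketches above):

# model.evaluate returns [loss, accuracy] for a model compiled with
# categorical cross-entropy and an accuracy metric.
loss, acc = model.evaluate(test_gen)
print(f"test accuracy: {acc:.2%}, test loss: {loss:.3f}")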


Read the Blog Here

If you have any queries, feedback, or suggestions, feel free to drop a mail at [email protected] :)
