American Sign Language Classification using CNNs: A Comparative Study
DOI: https://doi.org/10.3126/injet.v1i2.66704

Keywords: American Sign Language, Deep Learning, Convolutional Neural Network, Transfer Learning, Image Classification, Image Processing

Abstract
American Sign Language (ASL) classification is crucial in facilitating communication for individuals with hearing impairments. Traditional methods rely heavily on manual interpretation, which can be time-consuming and error-prone. Inspired by the success of deep learning techniques in image processing, this paper explores the application of Convolutional Neural Networks (CNNs) to ASL classification. It presents a CNN architecture tailored specifically for this task and investigates the effectiveness of transfer learning by leveraging four pre-trained models: VGG16, InceptionV3, ResNet50, and DenseNet121. A comparative analysis of these architectures is presented. The experimental results show that the customized CNN outperformed the pre-trained models, achieving a testing accuracy of 99.93% on the held-out test set. It is therefore concluded that the customized CNN is the most accurate of the evaluated models for classifying ASL signs.
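The abstract does not specify the exact layer configuration, so the sketch below is only a minimal illustration of the two approaches compared: a small customized CNN and a transfer-learning head on one of the four named pre-trained backbones. It assumes a TensorFlow/Keras setup, a 128x128 RGB input, and 29 output classes (a common layout for ASL alphabet datasets: A-Z plus space, delete, and nothing); all of these values, and every hyperparameter shown, are assumptions rather than the authors' published configuration.

    # Illustrative sketch only; input size, class count, and all
    # hyperparameters are assumptions, not the paper's configuration.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 29           # assumed ASL alphabet dataset layout
    INPUT_SHAPE = (128, 128, 3)  # assumed input resolution

    def build_custom_cnn():
        """A representative small CNN of the kind the paper describes."""
        return models.Sequential([
            layers.Input(shape=INPUT_SHAPE),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def build_transfer_model(backbone_name="VGG16"):
        """Transfer learning with one of the four backbones compared
        in the paper: VGG16, InceptionV3, ResNet50, DenseNet121."""
        backbones = {
            "VGG16": tf.keras.applications.VGG16,
            "InceptionV3": tf.keras.applications.InceptionV3,
            "ResNet50": tf.keras.applications.ResNet50,
            "DenseNet121": tf.keras.applications.DenseNet121,
        }
        base = backbones[backbone_name](
            include_top=False, weights="imagenet", input_shape=INPUT_SHAPE
        )
        base.trainable = False  # freeze ImageNet features; train head only
        return models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    model = build_custom_cnn()
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

Freezing the backbone and training only the classification head is the standard transfer-learning baseline; whether the authors also fine-tuned the pre-trained layers is not stated in the abstract.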