CNN-BiLSTM based Facial Emotion Recognition
DOI: https://doi.org/10.3126/injet.v2i1.72579
Keywords: Hybrid Neural Network, CNN-BiLSTM, Image Processing, Face Emotion Recognition, FER2013 Dataset
Abstract
Human emotions play an important role in communication, as they allow people to express themselves without words. Emotion recognition draws on many cues, including facial expression, body language, and the tone and pitch of the voice. Among these, facial expression plays an important role in interaction between humans and machines, and a significant amount of research has been done on facial emotion recognition. Traditionally, features were extracted from images manually, but in recent years various Machine Learning (ML) algorithms and Neural Networks (NN) have been applied to facial emotion recognition. In this paper, a hybrid CNN-BiLSTM neural network is used to extract features from facial images and detect emotion. The publicly available FER2013 dataset is used for this purpose. The model is trained on greyscale images to classify seven emotions: happy, sad, disgust, angry, fearful, surprise, and neutral. The CNN component extracts spatial features, while the BiLSTM layer processes these features to capture temporal dependencies. The model achieves an accuracy of 79.4% when classifying all seven emotions. However, when limited to detecting three emotions (happy, sad, neutral), the accuracy improves to 89.0%, demonstrating the model's potential for focused emotion recognition tasks.
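As a concrete illustration of the architecture described in the abstract, the following is a minimal sketch of a CNN-BiLSTM classifier written in TensorFlow/Keras for 48x48 greyscale FER2013 images and seven emotion classes. The framework choice, the number of convolutional blocks, filter counts, LSTM width, and dropout rate are illustrative assumptions, not the exact configuration reported in the paper.

# Minimal sketch of a CNN-BiLSTM classifier for FER2013-style input
# (48x48 greyscale images, 7 emotion classes). Layer sizes and counts are
# illustrative assumptions, not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm(input_shape=(48, 48, 1), num_classes=7):
    inputs = layers.Input(shape=input_shape)

    # CNN blocks extract spatial features from the face image.
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)   # 48x48 -> 24x24
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)   # 24x24 -> 12x12
    x = layers.Conv2D(128, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)   # 12x12 -> 6x6

    # Reshape the 6x6x128 feature map into a sequence of 6 row vectors so
    # the bidirectional LSTM can model dependencies across the feature maps.
    x = layers.Reshape((6, 6 * 128))(x)
    x = layers.Bidirectional(layers.LSTM(128))(x)

    # Classification head over the seven emotion classes.
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_bilstm()
model.summary()

For the three-class variant mentioned in the abstract (happy, sad, neutral), the same sketch applies with num_classes=3 and labels restricted to those categories.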
License
Copyright (c) 2024 International Journal on Engineering Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.