A Deep Learning Approach for Human Facial Expression Recognition using Residual Network – 101

Authors

  • Ranjana Kumari, Department of ECE, Institute of Technology, Mangalayatan University, Aligarh, Beswan, Uttar Pradesh 202145, India
  • Javed Wasim, Department of Computer Engineering & Applications, Institute of Engineering & Technology, Mangalayatan University, Beswan, Uttar Pradesh 202145, India

DOI:

https://doi.org/10.59796/jcst.V13N3.2023.2152

Keywords:

convolutional neural network, human emotion recognition, image resizing, noise removal, residual networks-101

Abstract

Emotion recognition is a dynamic process that focuses on a person's emotional state, meaning that the emotions associated with each individual's activities are unique. Human emotion analysis and recognition have long been popular study areas among computer vision researchers. High dimensionality, execution time, and cost are the main difficulties in human emotion detection. To address these issues, the proposed model designs a human emotion recognition system using Residual Networks-101 (ResNet-101). ResNet-101 is a Convolutional Neural Network (CNN) architecture that mitigates the vanishing-gradient problem, making it possible to train networks with thousands of convolutional layers that outperform networks with fewer layers. An image dataset was used for this emotion recognition task. The dataset was first preprocessed to resize the images and remove the noise content present in them. After preprocessing, the images were given to the classifier to recognize emotions effectively; here, ResNet-101 was used to classify six emotion classes. The experimental results demonstrate that the ResNet-101 model outperforms recent emotion recognition techniques. The proposed model was implemented in MATLAB, and several performance metrics were evaluated. The proposed architecture attained 92% accuracy with an error of 0.08, along with 92% precision, 85% specificity, and 98% sensitivity, which demonstrates the effectiveness of the proposed model over existing approaches such as LeNet, AlexNet, and VGG. Compared with current techniques, the proposed model also provides improved recognition accuracy for low-intensity or mild emotional expressions.
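The vanishing-gradient mitigation that the abstract attributes to ResNet-101 comes from identity skip connections: each residual block computes y = F(x) + x, so the gradient of the output with respect to the input always contains a direct identity term. A minimal pure-Python sketch of this idea (illustrative only — the actual model is a full ResNet-101, not these toy transforms):

```python
def residual_block(x, transform):
    """Residual block: output = F(x) + x (identity skip connection).

    Because the skip path is the identity, the output always carries the
    input through unchanged, which is what keeps signals (and gradients)
    from vanishing in very deep stacks.
    """
    return [fx + xi for fx, xi in zip(transform(x), x)]

# Toy "layer": a transform that scales features by 0.1. Stacking many of
# these WITHOUT skips shrinks activations toward zero.
shrink = lambda v: [0.1 * vi for vi in v]

x = [1.0, 2.0]
plain, skipped = x, x
for _ in range(10):
    plain = shrink(plain)                       # plain stack: decays to ~1e-10
    skipped = residual_block(skipped, shrink)   # residual stack: grows as 1.1**10

print(plain)    # values on the order of 1e-10
print(skipped)  # values on the order of 2.6 and 5.2
```

The same contrast explains why deeper plain CNNs can underperform shallower ones, while residual stacks of the same depth do not.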
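The reported figures (accuracy, error, precision, sensitivity, specificity) follow from standard one-vs-rest confusion-matrix definitions. A sketch of those definitions, using hypothetical counts for one emotion class (the paper does not publish its raw confusion matrix, so the numbers below are illustrative only):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard one-vs-rest metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)       # of predicted positives, fraction correct
    sensitivity = tp / (tp + fn)     # recall / true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    error = 1 - accuracy
    return accuracy, precision, sensitivity, specificity, error

# Hypothetical counts for a single class (not the paper's actual data):
acc, prec, sens, spec, err = classification_metrics(tp=98, fp=9, tn=51, fn=2)
print(f"accuracy={acc:.3f} precision={prec:.3f} "
      f"sensitivity={sens:.3f} specificity={spec:.3f} error={err:.3f}")
```

For a six-class problem such as this one, these quantities are typically computed per class and then averaged.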

Published

2023-08-30

How to Cite

Kumari, R., & Wasim, J. (2023). A Deep Learning Approach for Human Facial Expression Recognition using Residual Network – 101. Journal of Current Science and Technology, 13(3), 517–532. https://doi.org/10.59796/jcst.V13N3.2023.2152

Section

Research Article