An Application of K-Means and Cross-Correlation Techniques for Facial Emotion Recognition

Suchart Yammen
Phantida Limsripraphan

Abstract

This research proposes a high-performance, explainable, and person-specific architecture for Facial Emotion Recognition (FER) that addresses the computational-complexity limitations commonly found in deep learning models. The proposed methodology is based on digital signal processing: K-Means clustering extracts template vector features from critical regions of facial change, and a matched filter set organized in a two-stage structure then classifies the emotion. Experiments conducted on the JAFFE dataset using cross-validation for targeted individuals show that the proposed architecture classifies all seven basic emotions with 100% accuracy. These results highlight the potential of the approach for building highly accurate, lightweight, and user-adaptive emotion recognition systems.
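The two ideas named above — K-Means template extraction and matched-filter classification by cross-correlation — can be sketched as follows. This is a minimal illustration in Python/NumPy, not the paper's implementation: the function names, the flat feature-vector layout, and the single-stage correlation are assumptions (the paper uses a two-stage matched-filter structure over features drawn from critical facial regions).

```python
import numpy as np

def kmeans_templates(features, k, iters=50, seed=0):
    """Cluster feature vectors with plain K-Means; the k centroids
    serve as the emotion template vectors."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

def matched_filter_classify(x, templates):
    """Score an input against each template by normalized
    cross-correlation and return the best-matching index."""
    def normalize(v):
        v = v - v.mean()
        return v / (np.linalg.norm(v) + 1e-12)
    xn = normalize(np.asarray(x, dtype=float))
    scores = [float(np.dot(xn, normalize(t))) for t in templates]
    return int(np.argmax(scores)), scores
```

A usage sketch: cluster training feature vectors into one template per emotion, then assign a test vector to whichever template yields the highest correlation score.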

Article Details

How to Cite
S. Yammen and P. Limsripraphan, “An Application of K-Means and Cross-Correlation Techniques for Facial Emotion Recognition”, TEEJ, vol. 6, no. 1, pp. 16–22, Jan. 2026.
Section
Research article

References

M. Karnati, A. Seal, O. Krejcar, and A. Yazidi, “FER-net: facial expression recognition using deep neural net,” Neural Computing and Applications, vol. 33, no. 15, pp. 9125–9136, Jan. 2021, doi: 10.1007/s00521-020-05676-y.

C. Liu, K. Hirota, J. Ma, Z. Jia, and Y. Dai, “Facial Expression Recognition Using Hybrid Features of Pixel and Geometry,” IEEE Access, vol. 9, pp. 18876–18889, Jan. 2021, doi: 10.1109/ACCESS.2021.3054332.

O. Ekundayo and S. Viriri, “Facial Expression Recognition: A Review of Trends and Techniques,” IEEE Access, vol. 9, pp. 136944–136973, Sep. 2021, doi: 10.1109/ACCESS.2021.3113464.

P. Jiang, B. Wan, Q. Wang, and J. Wu, “Fast and Efficient Facial Expression Recognition Using a Gabor Convolutional Network,” IEEE Signal Processing Letters, vol. 27, pp. 1954–1958, Oct. 2020, doi: 10.1109/LSP.2020.3031504.

J. Kommineni, S. Mandala, M. S. Sunar, and P. M. Chakravarthy, “Accurate computing of facial expression recognition using a hybrid feature extraction technique,” The Journal of Supercomputing, vol. 77, no. 5, pp. 5019–5044, May 2021, doi: 10.1007/S11227-020-03468-8.

“Deep Facial Expression Recognition: A Survey,” IEEE Transactions on Affective Computing, vol. 13, no. 3, pp. 1195–1215, Jul. 2022, doi: 10.1109/taffc.2020.2981446.

A. Khan, “Facial Emotion Recognition Using Conventional Machine Learning and Deep Learning Methods: Current Achievements, Analysis and Remaining Challenges,” Information, vol. 13, no. 6, p. 268, May 2022, doi: 10.3390/info13060268.

“Understanding Deep Learning Techniques for Recognition of Human Emotions Using Facial Expressions: A Comprehensive Survey,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–31, Jan. 2023, doi: 10.1109/tim.2023.3243661.

S. Saurav et al., “Dual integrated convolutional neural network for real-time facial expression recognition in the wild,” The Visual Computer, pp. 1–14, Feb. 2021, doi: 10.1007/S00371-021-02069-7.

Z. Song, “Facial Expression Emotion Recognition Model Integrating Philosophy and Machine Learning Theory,” Frontiers in Psychology, vol. 12, 2021, doi: 10.3389/fpsyg.2021.759485.

M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial expressions with Gabor wavelets,” IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200–205, Apr. 1998, doi: 10.1109/AFGR.1998.670949.

A. A. Kandeel, M. Rahmanian, F. Zulkernine, H. M. Abbas, and H. S. Hassanein, “Facial Expression Recognition Using a Simplified Convolutional Neural Network Model,” International Conference on Communications, pp. 1–6, Mar. 2021, doi: 10.1109/ICCSPA49915.2021.9385739.

M. Arora and M. Kumar, “AutoFER: PCA and PSO based automatic facial emotion recognition,” Multimedia Tools and Applications, vol. 80, no. 2, pp. 3039–3049, Jan. 2021, doi: 10.1007/S11042-020-09726-4.

S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 4, pp. 376–380, Apr. 1991, doi: 10.1109/34.88573.

P. Ekman, W. V. Friesen, and J. C. Hager, “Facial Action Coding System. Manual and Investigator’s Guide,” 2002.

X. Zhang, Y. He, Y. Jin, H. Qin, M. Azhar, and J. Z. Huang, “A Robust k-Means Clustering Algorithm Based on Observation Point Mechanism,” Complexity, vol. 2020, pp. 1–11, Mar. 2020, doi: 10.1155/2020/3650926.

X. Li and H. Tan, “K-Means Algorithm Based on Initial Cluster Center Optimization,” Springer, Cham, 2020, pp. 310–316. doi: 10.1007/978-3-030-43306-2_44.

S. Jayaraman and A. Mahendran, “CNN-LSTM based emotion recognition using Chebyshev moment and K-fold validation with multi-library SVM,” PLOS ONE, vol. 20, no. 4, p. e0320058, 2025, doi: 10.1371/journal.pone.0320058.

S. Yammen and W. Limsripraphan, “Matched Filter Detector for Textile Fiber Classification of Signals with Near-Infrared Spectrum,” Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 501–505, Nov. 2022, doi: 10.23919/APSIPAASC55919.2022.9980054.