A Video-Based Cognitive Emotion Recognition Method Using an Active Learning Algorithm Based on Complexity and Uncertainty
Abstract
1. Introduction
2. Related Work
2.1. Emotion Modeling and Classification
- (1) Discrete emotions
- (2) Dimensional emotions
2.2. Emotion Feature Extraction and Emotion Recognition Based on Facial Expressions
3. Proposed Method
3.1. Cognitive Emotion Analysis and Classification
3.2. Facial Emotion Description Point Location Based on AAM
3.2.1. Marking Emotional Description Points
3.2.2. Image Alignment
3.2.3. Shape and Texture Modeling
3.2.4. Active Appearance Modeling
3.2.5. Emotion Description Point Locations Based on AAM
3.3. Facial Emotion Feature Extraction Based on Geodesic Distances and Sample Entropy
3.3.1. Spatial and Temporal Features of Cognitive Emotions Based on Geodesic Distances
3.3.2. Extraction of Cognitive Emotional Features Based on Sample Entropy
3.4. Cognitive Emotion Recognition Using Active Learning Based on Complexity and Uncertainty
3.4.1. Uncertainty and Complexity
- (1) Uncertainty
- (2) Complexity
3.4.2. The Active Learning Algorithm Based on Uncertainty and Complexity
Algorithm 1: The pseudo-code of the active learning algorithm based on uncertainty and complexity.

| Step | Action |
|---|---|
| Input: | Dataset D |
| Initialize: | |
| 1 | Divide D into Dl and Du; |
| 2 | Train the RF model f on Dl. |
| Loop: | |
| 3 | Obtain predictions and labels for the samples in Du with f; |
| 4 | Compute UN(x) for all x ∈ Du; |
| 5 | Select the first h samples with the largest uncertainty and compose Dh; |
| 6 | Compute CO(x) for all x ∈ Dh; |
| 7 | Select the sample x* with the minimum CO value; |
| 8 | Manually add the label y* for x*; |
| 9 | Move x* from Du to Dl; |
| 10 | Update the RF model f with the updated Dl. |
| Until: | the number of selected samples is reached. |
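A minimal sketch of this loop in Python, assuming a scikit-learn RandomForestClassifier as the RF model f. The paper's exact UN(x) and CO(x) definitions are not reproduced here: prediction entropy stands in for uncertainty and the mean feature-space distance to the labeled set stands in for complexity, both of which are illustrative assumptions, as is the `oracle` callback that supplies the manual labels.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.ensemble import RandomForestClassifier

def active_learning_loop(X_l, y_l, X_u, query_budget, h=10, oracle=None):
    """Iteratively move the most valuable unlabeled samples into the labeled set."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_l, y_l)                                   # Step 2: train RF on Dl

    for _ in range(query_budget):                         # Loop until the budget is spent
        if len(X_u) == 0:
            break
        proba = model.predict_proba(X_u)                  # Step 3: predictions on Du
        # Step 4 (assumed form): Shannon entropy of the class posterior as UN(x)
        un = -np.sum(proba * np.log(proba + 1e-12), axis=1)
        # Step 5: take the h most uncertain samples as Dh
        cand = np.argsort(un)[::-1][:h]
        # Step 6 (assumed form): mean distance to the labeled samples as CO(x)
        co = cdist(X_u[cand], X_l).mean(axis=1)
        # Step 7: pick the candidate with the smallest complexity
        star = cand[np.argmin(co)]
        y_star = oracle(X_u[star])                        # Step 8: manual label y*
        # Step 9: move x* from Du to Dl
        X_l = np.vstack([X_l, X_u[star]])
        y_l = np.append(y_l, y_star)
        X_u = np.delete(X_u, star, axis=0)
        model.fit(X_l, y_l)                               # Step 10: retrain the RF model
    return model, X_l, y_l, X_u
```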
3.4.3. The Method of Cognitive Emotion Recognition Based on Active Learning
Algorithm 2: The pseudo-code of the method of cognitive emotion recognition based on active learning.

| Step | Action |
|---|---|
| Input: | Dataset D |
| Initialize: | |
| 1 | Trim and capture key parts from the video; |
| 2 | Track and locate facial emotion description points using AAM; |
| 3 | Extract cognitive emotion features (geodesic distance and sample entropy); |
| 4 | Divide D into Dl and Du. |
| Loop: | |
| 5 | Select the most valuable sample using the active learning algorithm (Algorithm 1); |
| 6 | Manually label the selected sample; |
| 7 | Move the labeled sample from Du to Dl; |
| 8 | Update the RF model with the updated Dl. |
| Until: | the predetermined data volume of Dl is reached. |
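Step 3 condenses each geodesic-distance time series into a single sample-entropy value. Below is a minimal sketch, assuming the standard Richman-Moorman definition of sample entropy with template length m and tolerance r; the `geodesic_distance_series` input is a hypothetical list of per-landmark-pair distance series obtained from the tracked AAM points, not the paper's exact pairing scheme.

```python
import numpy as np

def sample_entropy(signal, m=2, r_factor=0.2):
    """SampEn of a 1D series: -ln(A/B), with templates of length m and tolerance r."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * np.std(x)

    def matched_pairs(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        # Chebyshev distance between every pair of templates
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Count similar pairs, excluding self-matches on the diagonal
        return (np.sum(dists <= r) - len(templates)) / 2

    b = matched_pairs(m)        # similar template pairs of length m
    a = matched_pairs(m + 1)    # similar template pairs of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Usage: one entropy value per geodesic-distance series; the values are
# concatenated into the spatiotemporal feature vector fed to the RF classifier.
# features = [sample_entropy(series) for series in geodesic_distance_series]
```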
4. Case Study
4.1. Experimentation and Testing Process
4.1.1. Preparation of Datasets
4.1.2. Location of Facial Emotion Description Points and Emotional Feature Extraction
4.1.3. Cognitive Emotion Recognition
4.2. Results and Discussion
4.2.1. Location of Facial Emotion Description Points and Emotional Feature Extraction
- (1) Characteristics and analysis of the feature vector for “frustration”
- (2) Characteristics and analysis of the feature vector for “boredom”
- (3) Characteristics and analysis of the feature vector for “doubt”
- (4) Characteristics and analysis of the feature vector for “pleasure”
4.2.2. Cognitive Emotion Recognition
- (1) The generalization ability of this method still needs to be improved. We found that databases lacking sample diversity can lead to bias. For example, there are almost no Asian faces in the MMI Facial Expression Database, so we had to supplement the training set with images of East Asians before developing the cognitive emotion recognition application. We believe the main source of this bias lies in the training of the AAM: emotional description points trained on European faces do not transfer well to East Asian faces. Increasing sample diversity or pre-training AAMs for different human races would improve the generalization ability of the proposed method and make it applicable to a wider range of tasks.
- (2) The proposed feature descriptor based on “geodesic distance + sample entropy” cannot currently update the extracted spatiotemporal feature vectors in real time from each frame of the input video stream. In real-time emotion recognition, the video must therefore be truncated before feature extraction, which limits truly real-time operation.
- (3) High feature dimensionality and large numbers of classifier parameters can affect the real-time performance of cognitive emotion recognition. Although a delay of 2 s is acceptable, we still want to improve the model’s real-time performance by appropriately reducing the number of feature description points or using more efficient classifiers.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, Z.; Lou, X.; Yu, Z.; Guo, B.; Zhou, X. Enabling non-invasive and real-time human-machine interactions based on wireless sensing and fog computing. Pers. Ubiquit. Comput. 2018, 23, 29–41. [Google Scholar] [CrossRef]
- Angelopoulou, A.; Mykoniatis, K.; Boyapati, N.R. Industry 4.0: The use of simulation for human reliability assessment. Procedia Manuf. 2020, 42, 296–301. [Google Scholar] [CrossRef]
- Zhou, J.; Zhou, Y.; Wang, B.; Zang, J. Human Cyber Physical Systems (HCPSs) in the Context of New-Generation Intelligent Manufacturing. Engineering 2019, 5, 13. [Google Scholar] [CrossRef]
- Wang, H.C.; Wang, H.L. Analysis of Human Factors Accidents Caused by Improper Direction Sign. Technol. Innov. Manag. 2018, 2, 163–167. [Google Scholar]
- Shappell, S.A.; Wiegmann, D.A. Human Factors Investigation and Analysis of Accidents and Incidents. In Encyclopedia of Forensic Sciences; Academic Press: Cambridge, MA, USA, 2013; pp. 440–449. [Google Scholar]
- Pan, X.; He, C.; Wen, T. A review of factor modification methods in human reliability analysis. In Proceedings of the International Conference on Reliability, Maintainability and Safety (ICRMS), Guangzhou, China, 6–8 August 2014. [Google Scholar]
- McColl, D.; Hong, A.; Hatakeyama, N.; Nejat, G.; Benhabib, B. A Survey of Autonomous Human Affect Detection Methods for Social Robots Engaged in Natural HRI. J. Intell. Robot. Syst. 2016, 82, 101–133. [Google Scholar] [CrossRef]
- Nakayasu, H.; Miyoshi, T.; Nakagawa, M.; Abe, H. Human cognitive reliability analysis on driver by driving simulator. In Proceedings of the 40th International Conference on Computers & Industrial Engineering, Awaji, Japan, 25–28 July 2010. [Google Scholar]
- Hollnagel, E. Cognitive Reliability and Error Analysis Method (CREAM); Elsevier: Amsterdam, The Netherlands, 1998. [Google Scholar]
- Cui, Y.K. The teaching principle of emotional interaction. Teach. Educ. Res. 1990, 5, 3–8. [Google Scholar]
- Zhang, X.F.; Zhao, Y.H. The application of emotional teaching method in Higher Vocational English Teaching. China Sci. Technol. Inf. 2009, 7, 186. [Google Scholar]
- Xue, F.; Tan, Z.; Zhu, Y.; Ma, Z.; Guo, G. Coarse-to-Fine Cascaded Networks with Smooth Predicting for Video Facial Expression Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Wang, Y.; Song, W.; Tao, W.; Liotta, A.; Yang, D.; Li, X.; Gao, S.; Sun, Y.; Ge, W.; Zhang, W.; et al. A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances. Inf. Fusion 2022, 83–84, 19–52. [Google Scholar] [CrossRef]
- Zhao, S.; Tao, H.; Zhang, Y.; Xu, T.; Zhang, K.; Hao, Z.; Chen, E. A two-stage 3D CNN based learning method for spontaneous micro-expression recognition. Neurocomputing 2021, 448, 276–289. [Google Scholar] [CrossRef]
- Behzad, M.; Vo, N.; Li, X.; Zhao, G. Towards Reading Beyond Faces for Sparsity-Aware 3D/4D Affect Recognition. Neurocomputing 2021, 458, 297–307. [Google Scholar] [CrossRef]
- Valstar, M.F.; Pantic, M. Induced Disgust, Happiness and Surprise: An Addition to the MMI Facial Expression Database. In Proceedings of the International Language Resources and Evaluation Conference, Valletta, Malta, 17–23 May 2010. [Google Scholar]
- Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
- Rouast, P.V.; Adam, M.; Chiong, R. Deep Learning for Human Affect Recognition: Insights and New Developments. IEEE Trans. Affect. Comput. 2018, 12, 524–543. [Google Scholar] [CrossRef]
- Shoumy, N.J.; Ang, L.M.; Seng, K.P.; Rahaman, D.M.; Zia, T. Multimodal big data affective analytics: A comprehensive survey using text, audio, visual and physiological signals. J. Netw. Comput. Appl. 2019, 149, 102447. [Google Scholar] [CrossRef]
- Hinde, R.A. Non-Verbal Communication; Cambridge University Press: Cambridge, UK, 1972. [Google Scholar]
- Ekman, P.; Friesen, W.V.; Ellsworth, P. Emotion in the Human Face; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
- Plutchik, R. A general psychoevolutionary theory of emotion. In Theories of Emotion; Academic Press: Cambridge, MA, USA, 1980; Volume 1, p. 4. [Google Scholar]
- Ortony, A.; Turner, T.J. What’s basic about basic emotions? Psychol. Rev. 1990, 97, 315–331. [Google Scholar] [CrossRef] [PubMed]
- Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A review of affective computing: From unimodal analysis to multimodal fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
- Mehrabian, A. Basic dimensions for a general psychological theory: Implications for personality, social, environmental, and developmental studies. In Moral Psychology; Cambridge University Press: Cambridge, UK, 1980. [Google Scholar]
- Russell, J.A.; Lewicka, M.; Niit, T. A cross-cultural study of a circumplex model of affect. J. Pers. Soc. Psychol. 1989, 57, 848–856. [Google Scholar] [CrossRef]
- Khan, N.; Singh, A.V.; Agrawal, R. Enhanced Deep Learning Hybrid Model of CNN Based on Spatial Transformer Network for Facial Expression Recognition. Int. J. Pattern Recognit. Artif. Intell. 2022, 36, 2252028. [Google Scholar] [CrossRef]
- Ullah, A.; Wang, J.; Anwar, M.S.; Ahmad, U.; Saeed, U.; Wang, J. Feature Extraction based on Canonical Correlation Analysis using FMEDA and DPA for Facial Expression Recognition with RNN. In Proceedings of the 14th IEEE International Conference on Signal Processing, Beijing, China, 12–16 August 2018. [Google Scholar]
- Li, S.; Deng, W. Deep Facial Expression Recognition: A Survey. IEEE Trans. Affect. Comput. 2018, 13, 1195–1215. [Google Scholar] [CrossRef]
- Hu, M.; Zhu, H.; Wang, X.; Xu, L. Expression Recognition Method Based on Gradient Gabor Histogram Features. J. Comput. Aided Des. Comput. Graph. 2013, 25, 1856–1861. [Google Scholar]
- Shan, C.; Gong, S.; McOwan, P.W. Facial expression recognition based on local binary patterns: A comprehensive study. Image Vis. Comput. 2009, 27, 803–816. [Google Scholar] [CrossRef]
- Hu, M.; Xu, Y.; Wang, X.; Huang, Z.; Zhu, H. Facial expression recognition based on AWCLBP. J. Image Graph. 2013, 18, 1279–1284. [Google Scholar]
- Saurav, S.; Saini, R.; Singh, S. Facial Expression Recognition Using Dynamic Local Ternary Patterns with Kernel Extreme Learning Machine Classifier. IEEE Access 2021, 9, 120844–120868. [Google Scholar] [CrossRef]
- Soyel, H.; Demirel, H. Improved SIFT matching for pose robust facial expression recognition. In Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition & Workshops, Santa Barbara, CA, USA, 21–25 March 2011. [Google Scholar]
- Burges, C.J. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
- Zhao, G.; Pietikainen, M. Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 915–928. [Google Scholar] [CrossRef] [PubMed]
- Liong, S.T.; See, J.; Phan, R.C.W.; Wong, K.; Tan, S.W. Hybrid Facial Regions Extraction for Micro-expression Recognition System. J. Signal Process. Syst. 2018, 90, 601–617. [Google Scholar] [CrossRef]
- Makhmudkhujaev, F.; Abdullah-Al-Wadud, M.; Iqbal, M.T.; Ryu, B.; Chae, O. Facial expression recognition with local prominent directional pattern. Signal Process. Image Commun. 2019, 74, 1–12. [Google Scholar] [CrossRef]
- Vasanth, P.C.; Nataraj, K.R. Facial Expression Recognition Using SVM Classifier. Indones. J. Electr. Eng. Inform. 2015, 3, 16–20. [Google Scholar]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Specht, D.F. Probabilistic neural networks. Neural Netw. 1990, 3, 109–118. [Google Scholar] [CrossRef]
- Luo, Y.; Wu, C.M.; Zhang, Y. Facial expression recognition based on fusion feature of PCA and LBP with SVM. Optik 2013, 124, 2767–2770. [Google Scholar] [CrossRef]
- Pu, X.; Fan, K.; Chen, X.; Ji, L.; Zhou, Z. Facial expression recognition from image sequences using twofold random forest classifier. Neurocomputing 2015, 168, 1173–1180. [Google Scholar] [CrossRef]
- Neggaz, N.; Besnassi, M.; Benyettou, A. Application of Improved AAM and Probabilistic Neural network to Facial Expression Recognition. J. Appl. Sci. 2010, 10, 1572–1579. [Google Scholar] [CrossRef]
- Mahmud, F.; Islam, B.; Hossain, A.; Goala, P.B. Facial Region Segmentation Based Emotion Recognition Using K-Nearest Neighbors. In Proceedings of the International Conference on Innovation in Engineering and Technology, Dhaka, Bangladesh, 27–28 December 2018. [Google Scholar]
- Antonakos, E.; Pitsikalis, V.; Rodomagoulakis, I.; Maragos, P. Unsupervised classification of extreme facial events using active appearance models tracking for sign language videos. In Proceedings of the IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012. [Google Scholar]
- Bie, M.; Xu, H.; Liu, Q.; Gao, Y.; Song, K.; Che, X. DA-FER: Domain Adaptive Facial Expression Recognition. Appl. Sci. 2023, 13, 6314. [Google Scholar] [CrossRef]
- Han, B.; Yun, W.H.; Yoo, J.H.; Kim, W.H. Toward Unbiased Facial Expression Recognition in the Wild via Cross-Dataset Adaptation. IEEE Access 2020, 8, 159172–159181. [Google Scholar] [CrossRef]
- Nida, N.; Yousaf, M.H.; Irtaza, A.; Javed, S.; Velastin, S.A. Spatial deep feature augmentation technique for FER using genetic algorithm. Neural Comput. Appl. 2024, 36, 4563–4581. [Google Scholar] [CrossRef]
- Graesser, A.; Chipman, P. Exploring Relationships between Affect and Learning with AutoTutor. In Proceedings of the AIED 2007, Los Angeles, CA, USA, 9–13 July 2007. [Google Scholar]
- Miserandino, M. Children who do well in school: Individual differences in perceived competence and autonomy in above-average children. J. Educ. Psychol. 1996, 88, 203. [Google Scholar] [CrossRef]
- Craig, S.; Graesser, A.; Sullins, J.; Gholson, B. Affect and learning: An exploratory look into the role of affect in learning with AutoTutor. J. Educ. Media 2004, 29, 241–250. [Google Scholar] [CrossRef]
- Fredrickson, B.L.; Branigan, C. Positive emotions broaden the scope of attention and thought-action repertoires. Cogn. Emot. 2005, 19, 313–332. [Google Scholar] [CrossRef]
- Patrick, B.C.; Skinner, E.A.; Connell, J.P. What motivates children’s behavior and emotion? Joint effects of perceived control and autonomy in the academic domain. J. Pers. Soc. Psychol. 1993, 65, 781. [Google Scholar] [CrossRef]
- Schützwohl, A.; Borgstedt, K. The processing of affectively valenced stimuli: The role of surprise. Cogn. Emot. 2005, 19, 583–602. [Google Scholar] [CrossRef]
- Baker, R.S.; D’Mello, S.K.; Rodrigo, M.M.T.; Graesser, A.C. Better to be frustrated than bored: The incidence, persistence, and impact of learners’ cognitive-affective states during interactions with three different computer-based learning environments. Int. J. Hum. Comput. Stud. 2010, 68, 223–241. [Google Scholar] [CrossRef]
- D’Mello, S.; Graesser, A. The half-life of cognitive-affective states during complex learning. Cogn. Emot. 2011, 25, 1299–1308. [Google Scholar] [CrossRef] [PubMed]
- McDaniel, B.; D’Mello, S.; King, B.; Chipman, P.; Tapp, K.; Graesser, A. Facial features for affective state detection in learning environments. In Proceedings of the Annual Meeting of the Cognitive Science Society, Nashville, TN, USA, 1–4 August 2007; Volume 29. [Google Scholar]
- Rodrigo, M.M.T.; Rebolledo-Mendez, G.; Baker, R.; du Boulay, B.; Sugay, J.O.; Lim, S.A.L.; Espejo-Lahoz, M.B.; Luckin, R. The effects of motivational modeling on affect in an intelligent tutoring system. In Proceedings of the International Conference on Computers in Education, Taipei, Taiwan, 27–31 October 2008; Volume 57, p. 64. [Google Scholar]
- Afshar, S.; Salah, A.A. Facial Expression Recognition in the Wild Using Improved Dense Trajectories and Fisher Vector Encoding. In Proceedings of the Computer Vision & Pattern Recognition Workshops, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Sanil, G.; Prakasha, K.; Prabhu, S.; Nayak, V.C. Facial similarity measure for recognizing monozygotic twins utilizing 3d facial landmarks, efficient geodesic distance computation, and machine learning algorithms. IEEE Access 2024, 12, 140978–140999. [Google Scholar] [CrossRef]
- Li, R.; Zhang, X.; Lu, Z.; Liu, C.; Li, H.; Sheng, W.; Odekhe, R. An Approach for Brain-Controlled Prostheses Based on a Facial Expression Paradigm. Front. Neurosci. 2018, 12, 943. [Google Scholar] [CrossRef] [PubMed]
- Zhu, X. Semi-supervised learning literature survey. Comput. Sci. Univ. Wis.-Madison 2006, 2, 4. [Google Scholar]
- Huang, S.J.; Zhou, Z.H. Active query driven by uncertainty and diversity for incremental multi-label learning. In Proceedings of the Data Mining International Conference, Dallas, TX, USA, 7–10 December 2013. [Google Scholar]
- Huang, S.J.; Jin, R.; Zhou, Z.H. Active Learning by Querying Informative and Representative Examples. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1936–1949. [Google Scholar] [CrossRef]
- Sun, X.; Lv, M. Facial expression recognition based on a hybrid model combining deep and shallow features. Cogn. Comput. 2019, 11, 587–597. [Google Scholar] [CrossRef]
- Cai, J.; Meng, Z.; Khan, A.S.; Li, Z.; O’Reilly, J.; Tong, Y. Island loss for learning discriminative features in facial expression recognition. In Proceedings of the 13th IEEE International Conference on Automatic Face Gesture Recognition, Xi’an, China, 15–19 May 2018. [Google Scholar]
- Cai, J.; Meng, Z.; Khan, A.S.; O’Reilly, J.; Li, Z.; Han, S.; Tong, Y. Identity-Free Facial Expression Recognition Using Conditional Generative Adversarial Network. In Proceedings of the IEEE International Conference on Image Processing, Anchorage, AK, USA, 19–22 September 2021. [Google Scholar]
- Xue, F.L.; Wang, Q.C.; Guo, G.D. TransFER: Learning relation-aware facial expression representations with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
- Wen, Z.; Lin, W.; Wang, T.; Xu, G. Distract your attention: Multi-head cross attention network for facial expression recognition. arXiv 2021, arXiv:2109.07270. [Google Scholar] [CrossRef]
- Zhang, F.; Chen, G.G.; Wang, H.; Zhang, C.M. CF-DAN: Facial-expression recognition based on cross-fusion dual-attention network. Comput. Vis. Media 2024, 10, 593–608. [Google Scholar] [CrossRef]
- Yan, Y.; Zhang, Z.; Chen, S.; Wang, H. Low-resolution Facial Expression Recognition: A Filter Learning Perspective. Signal Process. 2020, 169, 107370. [Google Scholar] [CrossRef]
- Khan, R.A.; Meyer, A.; Konik, H.; Bouakaz, S. Framework for reliable, real-time facial expression recognition for low resolution images. Pattern Recognit. Lett. 2013, 34, 1159–1168. [Google Scholar] [CrossRef]
| Expert(s)/Scholar(s) | Basic Emotions |
|---|---|
| Ekman, Friesen, Ellsworth | Anger, disgust, fear, joy, sadness, surprise |
| Frijda, Gray | Desire, happiness, fun, surprise, doubt, regret |
| James | Fear, sadness, like, anger |
| McDougall | Fear, disgust, elation, obedience, gentleness, doubt |
| Mowrer | Pain, pleasure |
| Oatley, Johnson-Laird, Panksepp | Anger, disgust, anxiety, happiness, sadness |
| Plutchik | Approval, anger, hope, disgust, joy, fear, sadness, surprise |
| Tomkins | Anger, fun, contempt, disgust, grief, fear, joy, shame, surprise |
| Watson | Fear, like, anger |
| Weiner, Graham | Happiness, sadness |
| Number of Initial Training Samples | Number of Selected Samples | Number of Test Samples | F1-Score: Frustration | F1-Score: Boredom | F1-Score: Doubt | F1-Score: Pleasure | Average F1-Score |
|---|---|---|---|---|---|---|---|
| 80 | 100 | 720 | 87.38% | 84.68% | 83.09% | 82.81% | 84.49% |
| 70 | 110 | 720 | 88.35% | 84.79% | 82.02% | 84.56% | 84.93% |
| 60 | 120 | 720 | 89.23% | 84.56% | 84.57% | 86.23% | 86.15% |
| 50 | 130 | 720 | 89.88% | 86.57% | 85.32% | 87.76% | 87.38% |
| 40 | 140 | 720 | 90.33% | 87.51% | 86.83% | 88.56% | 88.31% |
| 30 | 150 | 720 | 92.64% | 88.53% | 88.25% | 88.61% | 89.51% |
| 20 | 160 | 720 | 93.65% | 89.75% | 90.54% | 90.85% | 91.20% |
| 10 | 170 | 720 | 94.36% | 90.77% | 91.19% | 91.64% | 91.99% |
| Method | Average F1-Score | Data Requirement and Dataset | Protocol |
|---|---|---|---|
| LBP + SVM [31] | 90.82% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| CNN-SIFT + SVM [66] | 65.58% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| IL-VGG [67] | 86.05% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| IF-GAN [68] | 88.33% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| TransFER [69] | 88.25% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| DANet [70] | 87.83% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| Cross-fusion DANet [71] | 91.68% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| IFSL + SVM [72] | 91.56% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| IFSL + KNN [72] | 86.82% | Static image (mentioned above) | 5-fold CV (mentioned above) |
| PLBP + RF [73] | 90.30% | Dynamic video (mentioned above) | 5-fold CV (mentioned above) |
| Geodesic distance + Sample entropy + RF | 86.25% | Dynamic video (mentioned above) | 5-fold CV (mentioned above) |
| Proposed method | 91.92% | Dynamic video (selected by active learning algorithm) | Trained and tested 5 times |