INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025
www.ijltemas.in Page 258
Emotion Recognition from Facial Expressions Using Convolutional
Neural Networks
1Anamika, 1Manoj Kumar, 1Jagdeep Singh, 1Sachin Kumar, 1Sharad Kumar, 2Vikas Sharma
1School of Engineering & Technology, Shri Venkateshwara University, Gajraula, U.P., India
2Department of Computer Applications, SRM Institute of Science and Technology, Delhi NCR Campus, Ghaziabad, U.P., India
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1410000036
Received: 02 October 2025; Accepted: 09 October 2025; Published: 07 November 2025
Abstract: Facial expressions are one of the most natural and universal ways of conveying human emotions, making their
automatic recognition a critical component in affective computing and human–computer interaction. This paper presents a
Convolutional Neural Network (CNN)-based approach for emotion recognition from facial images. The proposed model utilizes
deep feature extraction to capture spatial hierarchies in facial regions, enabling accurate classification of emotions such as
happiness, sadness, anger, surprise, fear, disgust, and neutrality. By training and evaluating the CNN on publicly available
benchmark datasets, the model demonstrates robust performance and generalization across diverse facial variations. Experimental
results highlight the efficiency of CNNs in recognizing subtle emotional cues without relying on handcrafted features. The
proposed approach holds significant potential for applications in mental health monitoring, intelligent tutoring systems, adaptive
user interfaces, and surveillance systems where understanding human emotions is essential.
Keywords: Facial Expression Recognition, Emotion Recognition, Convolutional Neural Networks, Deep Learning, Human
Computer Interaction, Affective Computing.
I. Introduction
Human emotions are fundamental to social interaction and communication, shaping the way individuals respond to different
contexts and environments. Facial expressions, in particular, are one of the most natural and universal mediums for conveying
emotions, often transcending cultural and linguistic barriers. The ability to automatically recognize emotions from facial
expressions has therefore emerged as a crucial research area in fields such as affective computing, human–computer interaction
(HCI), psychology, and artificial intelligence. With the rapid development of computer vision and deep learning techniques,
particularly Convolutional Neural Networks (CNNs), significant progress has been achieved in the automated analysis and
classification of facial emotions. Emotion recognition through facial expressions has wide-ranging applications. In healthcare, it
can be leveraged for mental health monitoring, early detection of stress, and therapy support systems. In the education sector,
intelligent tutoring systems can benefit from emotion-aware models to adapt teaching strategies in real time based on a learner’s
emotional state. In security and surveillance, emotion recognition provides additional layers of contextual information, enabling
proactive responses to potentially risky situations. Similarly, in entertainment and customer service, recognizing emotions
enhances personalization and improves user satisfaction. Despite these promising applications, emotion recognition remains a
challenging problem due to variations in lighting, occlusion, facial orientation, cultural diversity, and the subtlety of certain
expressions. Colares et al. [1] proposed a dual-input model combining facial images and landmarks, demonstrating that integrating
spatial facial features enhances emotion classification accuracy. Traditional machine learning approaches for facial expression
recognition have relied on handcrafted features such as Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), or
Scale-Invariant Feature Transform (SIFT). While effective to some extent, these methods are limited by their dependency on
manual feature engineering and their inability to generalize across complex, nonlinear variations in facial data. In contrast, deep
learning, and particularly CNNs, have revolutionized image classification tasks by automatically learning hierarchical feature
representations directly from raw input data. CNNs are capable of extracting low-level features such as edges and textures in
early layers, and progressively capturing high-level semantic features such as facial landmarks and expression-specific regions in
deeper layers. This ability makes CNNs exceptionally well-suited for emotion recognition tasks. Recent studies have shown that
CNN-based models outperform traditional approaches on several benchmark datasets, including FER-2013, CK+, and JAFFE.
These datasets provide diverse samples of human emotions, enabling CNNs to learn discriminative features for classification
across multiple emotional categories such as happiness, sadness, anger, surprise, fear, disgust, and neutrality. The strength of
CNNs lies not only in their accuracy but also in their scalability, making them adaptable to real-world applications where data
may be noisy or incomplete. Moreover, with the integration of techniques such as data augmentation, dropout, and transfer
learning, CNN-based emotion recognition systems can achieve robust performance even with limited training data. Another
advantage of CNNs in facial expression recognition is their ability to work in real time. With optimized architectures and the
availability of high-performance GPUs, CNN models can process images or video frames quickly, allowing for instant emotion
detection. Similarly, Dhankhar and Vir [2] introduced an optimized CNN-RFE-Attention approach, which effectively captures
critical facial regions, improving detection. This is particularly useful in interactive systems such as virtual assistants, autonomous
vehicles, or customer service bots, where immediate recognition of user emotions is critical. Furthermore, the integration of
CNN-based emotion recognition with multimodal data, such as speech, physiological signals, or body gestures, can lead to
more comprehensive affective computing frameworks, offering a deeper understanding of human emotional states. However,
challenges remain in developing universally robust CNN-based emotion recognition systems. Variations in facial expressions due
to cultural differences, age, gender, or social context can affect recognition accuracy. Additionally, spontaneous facial
expressions, which occur naturally in daily interactions, are often more subtle and harder to classify compared to posed
expressions typically found in datasets. Addressing these challenges requires the development of more diverse training datasets,
advanced preprocessing techniques, and architectures capable of capturing fine-grained emotional cues. Ethical considerations,
such as data privacy, bias, and the responsible use of emotion recognition technologies, must also be carefully addressed to ensure
trust and transparency in real-world deployments.
II. Literature Review
Facial emotion recognition (FER) has witnessed significant advancements with the adoption of deep learning techniques,
particularly Convolutional Neural Networks (CNNs), which have outperformed traditional handcrafted feature-based approaches.
Almulla [3] emphasized the potential of deep CNNs for FER, highlighting their ability to automatically learn hierarchical
representations directly from raw facial images. Attention mechanisms and hybrid models have also been explored to boost FER
performance. Kumar et al. [4] developed an Attention-based ConvNet-Recurrent Neural Network framework, combining spatial
and temporal features for more robust emotion detection. Khan et al. [5] leveraged deep learning to predict semantic ratings
alongside facial expressions, showing that deep architectures can simultaneously handle recognition and affective interpretation
tasks. Hussain et al. [6] demonstrated that standard CNN models, when properly trained and augmented, can achieve high accuracy
in classifying basic emotions, reinforcing the effectiveness of convolutional approaches for FER. A comprehensive review by
Meher and Verma [7] summarized recent CNN-based FER methodologies, highlighting the trend toward deeper architectures, data
augmentation, and hybrid models to address challenges such as imbalanced datasets and occlusions. Local feature extraction
methods have also been investigated for their ability to capture fine-grained expressions. Lu et al. [8] proposed a local-feature-
based FER model, which improved recognition of subtle emotions by focusing on region-specific facial patterns. Multimodal
approaches have been increasingly considered, combining facial cues with speech signals to enhance recognition accuracy.
Avabratha et al. [9] presented a multimodal analysis using CNNs and Random Forest classifiers, demonstrating that integrating
audio and visual features significantly improves emotion prediction reliability. Recent studies have explored practical applications
of FER in various domains. Mishra et al. [10] highlighted the use of deep learning for real-world emotion recognition, while
Harianto et al. [11] compared CNNs and LSTM networks for analyzing customer satisfaction via facial emotions, suggesting that
temporal modeling can provide additional context for dynamic interactions. Babu et al. [12] emphasized real-time FER systems for
human–machine interaction, enabling responsive and adaptive interfaces. Furthermore, the integration of FER with other intelligent
systems has been explored for broader cognitive and behavioural applications, such as image caption generation and brain–
computer interfaces [13], [14], as well as systematic meta-analyses of computer vision techniques for emotion detection [15].
Emerging applications in e-commerce and child safety also leverage facial analysis, indicating the increasing relevance of FER in
socially impactful contexts [16], [17]. A recent study [18] provides a comprehensive analysis of security mechanisms in MANETs,
covering cryptographic protocols, intrusion detection systems, secure routing strategies, and trust-based frameworks. The work not
only evaluates the strengths and limitations of these approaches but also offers a structured taxonomy of MANET-specific threats
such as wormhole, black hole, Sybil, and denial-of-service attacks, thereby guiding the development of adaptive and context-aware
security frameworks. A comparative study [19] evaluated multiple deep learning architectures for fake news classification,
analyzing their performance, accuracy, and adaptability to diverse datasets. The findings highlight the strengths of advanced neural
models in capturing linguistic and contextual features of deceptive content, while also pointing out limitations such as
computational cost and vulnerability to evolving misinformation strategies. Overall, these studies highlight that deep learning-
based FER, particularly CNN and hybrid models with attention or multimodal inputs, provides superior performance over
traditional methods. Despite the progress, challenges such as cultural variability, occlusions, spontaneous expressions, and dataset
imbalances remain active areas of research, motivating continued exploration of more robust, adaptive, and real-time FER systems.
III. Proposed Methodology
The proposed framework for Emotion Recognition from Facial Expressions Using Convolutional Neural Networks (CNNs) is
designed to automatically extract discriminative facial features and classify them into predefined emotion categories. The
framework comprises several stages: data preprocessing, data augmentation, CNN-based feature extraction, classification, and
evaluation shown in Fig. 1.
Fig. 1. Automated Emotion Recognition Model Across Different Emotions Using CNNs
1) Data Preprocessing: It plays a crucial role in ensuring that the input facial images are standardized before being passed into
the CNN model. The process begins with face detection and cropping, where algorithms such as Viola–Jones or MTCNN are
applied to identify and isolate the region of interest containing the face. This step removes unnecessary background elements and
enhances the model’s focus on facial regions relevant to expression analysis. The cropped images are then converted to grayscale
to reduce computational complexity, as emotional cues are more closely associated with textures and structural patterns than with
color information. Pixel normalization is performed to scale values between 0 and 1, improving training stability and
convergence. Finally, all images are resized to a uniform resolution (e.g., 48×48 pixels), which ensures consistency across the
dataset and compatibility with the CNN input layer.
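These preprocessing steps can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: face detection is assumed to have already produced a cropped RGB face (a real pipeline would use OpenCV's Viola–Jones cascade or MTCNN for that step), and the naive nearest-neighbor resize stands in for `cv2.resize`.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def resize_nearest(img, size=(48, 48)):
    """Naive nearest-neighbor resize; production code would use cv2.resize."""
    h, w = img.shape
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[rows][:, cols]

def preprocess(face_rgb):
    """Grayscale -> [0, 1] pixel normalization -> 48x48, as described above."""
    gray = to_grayscale(face_rgb.astype(np.float32))
    gray = gray / 255.0                       # scale pixel values to [0, 1]
    return resize_nearest(gray, (48, 48))     # uniform CNN input resolution

# Example: a dummy 120x100 "cropped face" standing in for a detector's output.
face = np.random.randint(0, 256, (120, 100, 3)).astype(np.uint8)
x = preprocess(face)
print(x.shape)    # (48, 48)
```

The output is then ready to be fed to a CNN input layer expecting 48×48 single-channel images.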
2) Data Augmentation: To improve generalization and prevent overfitting, data augmentation techniques are applied to
artificially expand the training dataset. Augmentation introduces controlled variability in the input data, allowing the CNN model
to learn robust features that remain effective under real-world conditions. This includes random rotations, horizontal flipping,
shifting, zooming, and brightness adjustments. For instance, a slight rotation simulates natural head tilts, while horizontal flips
account for left–right asymmetries in facial expressions. Such transformations generate new variations of existing samples,
thereby increasing the effective dataset size without requiring additional data collection. As a result, the CNN model becomes less
sensitive to variations in orientation, lighting, and scale, ultimately enhancing its recognition accuracy across diverse scenarios.
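A minimal NumPy sketch of two of the augmentations described above (horizontal flip, shift, and brightness jitter) is shown below; it is illustrative only, since real pipelines would typically use Keras preprocessing layers or torchvision transforms, which also cover rotation and zoom.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Randomly flip, shift, and brightness-adjust a 48x48 grayscale face."""
    out = img.copy()
    if rng.random() < 0.5:                    # horizontal flip (left-right)
        out = out[:, ::-1]
    dx = int(rng.integers(-3, 4))             # small horizontal shift
    out = np.roll(out, dx, axis=1)
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)   # brightness jitter
    return out

base = rng.random((48, 48))                   # one normalized training image
batch = np.stack([augment(base) for _ in range(8)])   # 8 augmented variants
print(batch.shape)    # (8, 48, 48)
```

Each call produces a new variant of the same sample, expanding the effective dataset size without additional data collection.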
3) CNN-Based Feature Extraction: The core of the proposed framework lies in the use of CNNs for hierarchical feature
extraction. Unlike traditional machine learning methods that rely on handcrafted features, CNNs learn discriminative patterns
directly from raw image data. The architecture begins with convolutional layers that apply filters to capture low-level features
such as edges, textures, and corners. Deeper layers progressively extract higher-level representations, including facial landmarks
and expression-specific details. Each convolutional operation is followed by a Rectified Linear Unit (ReLU) activation,
introducing nonlinearity and enabling the network to model complex patterns. Pooling layers, particularly max pooling, are
employed to reduce spatial dimensions while retaining essential information, thereby improving computational efficiency.
Dropout layers are incorporated to reduce overfitting by randomly deactivating a fraction of neurons during training. Together,
these components enable the CNN to learn robust and generalizable representations of facial expressions.
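The convolution → ReLU → max-pooling building block can be demonstrated with a hand-rolled NumPy forward pass. The edge-detecting kernel below is a hand-picked stand-in for the low-level filters an early CNN layer would learn; deep-learning frameworks implement the same operations far more efficiently.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    """k x k max pooling: halves spatial dimensions, keeps strong responses."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

# A vertical-edge kernel: the kind of low-level feature early layers learn.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)

img = np.zeros((48, 48))
img[:, :24] = 1.0                       # image with one vertical edge
fmap = max_pool(relu(conv2d(img, edge_kernel)))
print(fmap.shape)                       # pooled feature map: (23, 23)
```

Stacking such blocks is what lets deeper layers combine edge responses into landmark- and expression-specific features.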
4) Classification: Once the features are extracted through convolutional and pooling operations, they are flattened and passed
into fully connected layers that combine the learned representations. These layers integrate localized patterns into a global
understanding of the facial expression. The final stage of classification is achieved through a SoftMax output layer, which
converts the feature activations into class probabilities. Each probability corresponds to an emotion category, such as happiness,
sadness, anger, fear, disgust, surprise, or neutrality. The emotion with the highest probability is selected as the predicted class.
The use of SoftMax ensures that the model outputs a normalized distribution, making it suitable for multi-class classification
tasks.
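The softmax output stage described above can be written in a few lines of NumPy. The logits below are hypothetical values standing in for the final dense layer's activations for one face.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "fear",
            "disgust", "surprise", "neutral"]

def softmax(z):
    """Numerically stable softmax: maps raw scores to class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from the final fully connected layer for one image.
logits = np.array([2.1, 0.3, -0.5, 0.0, -1.2, 1.4, 0.8])
probs = softmax(logits)

print(round(float(probs.sum()), 6))        # 1.0 -- a normalized distribution
print(EMOTIONS[int(np.argmax(probs))])     # highest-probability class wins
```

The argmax over the normalized distribution yields the predicted emotion, here "happiness".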
5) Evaluation: The performance of the proposed CNN-based framework is evaluated using established metrics to ensure its
reliability and effectiveness. Accuracy is measured to assess the overall proportion of correctly classified samples. Additionally,
precision, recall, and F1-score are used to provide class-wise insights, which are particularly important in addressing imbalanced
datasets where certain emotions may be underrepresented. A confusion matrix is also generated to visualize the distribution of
correct and incorrect predictions across all emotion categories. This allows for detailed analysis of common misclassifications,
such as confusion between fear and surprise due to their visual similarity. By employing these metrics, the system’s performance
is comprehensively validated, ensuring its applicability in real-world emotion recognition tasks.
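The confusion matrix and the class-wise metrics can be computed as below, using a toy three-class example rather than the paper's results; in practice scikit-learn's `classification_report` and `confusion_matrix` provide the same quantities.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm, c):
    """Precision, recall, and F1 for class c from a confusion matrix."""
    tp = cm[c, c]
    precision = tp / cm[:, c].sum()     # correct / everything predicted as c
    recall = tp / cm[c, :].sum()        # correct / everything truly c
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy labels for three classes (e.g. fear=0, surprise=1, neutral=2).
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 0, 1, 0, 2, 2, 1]
cm = confusion_matrix(y_true, y_pred, 3)
p, r, f1 = per_class_metrics(cm, 0)
print(cm[0])          # fear row: one fear sample was confused with surprise
print(round(r, 2))    # recall for fear
```

Off-diagonal entries of the matrix expose exactly the misclassification pairs discussed above, such as fear predicted as surprise.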
IV. Result & Analysis
The proposed CNN-based framework for facial expression-based emotion recognition was evaluated on the FER-2013
benchmark dataset, which consists of more than 35,000 grayscale images categorized into seven distinct emotion classes:
happiness, sadness, anger, fear, disgust, surprise, and neutrality. The dataset was divided into 70% training, 15% validation, and
15% testing, ensuring a balanced distribution of samples across all categories. To assess the performance of the system, standard
evaluation metrics including accuracy, precision, recall, F1-score, and confusion matrix were employed. The development and
execution of the proposed Convolutional Neural Network (CNN) framework for emotion recognition from facial expressions
demand both hardware and software resources capable of handling deep learning operations efficiently. On the hardware side, a
system with a high-performance processor such as an Intel i7 is recommended to support computationally intensive tasks. A
minimum of 16 GB of RAM is desirable to ensure smooth handling of large datasets during training and validation phases. Since
CNN training involves extensive matrix operations and backpropagation, a dedicated Graphics Processing Unit (GPU) such as
NVIDIA GTX/RTX series or equivalent with at least 4 GB of VRAM is essential to accelerate the training process and reduce
computational time. Additionally, a solid-state drive (SSD) with sufficient storage capacity, preferably above 500 GB, is
necessary for storing datasets, trained models, and intermediate results. From a software perspective, the system requires a stable
operating environment, typically a Linux-based distribution such as Ubuntu or a Windows 10/11 platform with support for
machine learning frameworks. Python is employed as the primary programming language due to its extensive support for
scientific computing and deep learning libraries. Frameworks such as TensorFlow or PyTorch are used for designing and training
the CNN model, while supporting libraries including NumPy, Pandas, Matplotlib, and OpenCV facilitate data handling,
preprocessing, visualization, and analysis. For efficient experiment tracking and reproducibility, tools such as Jupyter Notebook
or integrated development environments (IDEs) like PyCharm can be employed. Furthermore, CUDA and cuDNN libraries are
essential for enabling GPU acceleration, ensuring that the CNN can be trained in a reasonable time frame. Overall, the system
requirements are designed to balance efficiency, scalability, and reproducibility, providing a robust environment for emotion
recognition research and deployment.
To comprehensively evaluate the effectiveness of the proposed Convolutional Neural Network (CNN) for emotion recognition,
several performance metrics were employed rather than relying solely on overall accuracy. Since the FER-2013 dataset includes
multiple classes with imbalanced distributions of emotions, a diverse set of evaluation measures is required to capture the
strengths and weaknesses of the model. The chosen metrics include Accuracy, Precision, Recall, and F1-Score, each offering
distinct insights into the system’s classification behavior.
1) Accuracy: Accuracy is the most widely used performance measure in classification tasks. It indicates the overall proportion of
correctly predicted labels out of the total samples. In the context of facial emotion recognition, accuracy shows how effectively
the CNN classifies emotions across all seven categories. While accuracy gives a quick overview of performance, it may not
always capture the model’s ability to handle imbalanced datasets where certain emotions have fewer samples. TABLE I shows
the accuracy of emotion recognition for seven emotional classes, Happiness, Sadness, Anger, Fear, Disgust, Surprise, and
Neutral, along with the number of samples for each class. Fig. 2 plots these per-class accuracies: Happiness shows the highest
accuracy, followed by Surprise and Sadness, while Disgust has the lowest.
Table I Emotion Recognition Accuracy Across Different Classes

Emotion      Accuracy   Samples
Happiness    0.92        6,215
Sadness      0.88        4,850
Anger        0.85        4,132
Fear         0.82        3,847
Disgust      0.80        2,500
Surprise     0.90        5,056
Neutral      0.87        6,500
Overall      0.86       35,887
Fig. 2. Accuracy of Emotion Recognition Model Across Different Emotions
2) Precision: Precision measures the proportion of correctly identified positive predictions against all instances that the model
labelled as positive. High precision indicates that the CNN rarely misclassifies one emotion as another. For example, high
precision for “anger” means that most samples predicted as “anger” are indeed correct. Precision is particularly valuable in
applications like healthcare or surveillance, where minimizing false alarms is crucial. TABLE II shows precision scores for seven
emotions, Happiness (0.87), Sadness (0.74), Anger (0.70), Fear (0.65), Disgust (0.61), Surprise (0.82), and Neutral (0.79),
along with the number of samples for each emotion. Fig. 3 shows the recall scores for seven emotions: Happiness (≈0.85),
Sadness (≈0.72), Anger (≈0.69), Fear (≈0.63), Disgust (≈0.60), Surprise (≈0.81), and Neutral (≈0.77). Happiness and Surprise
show the highest recall, while Disgust has the lowest recall among the emotions.
Table II. Precision Scores for Emotion Recognition Across FER-2013 Dataset

Emotion      Precision   Samples
Happiness    0.87         6,215
Sadness      0.74         4,850
Anger        0.70         4,132
Fear         0.65         3,847
Disgust      0.61         2,500
Surprise     0.82         5,056
Neutral      0.79         6,500
Overall      0.77        35,887
Fig. 3. Recall Scores for Emotion Recognition Across FER-2013 Dataset
3) Recall (Sensitivity): Recall measures the proportion of actual positive cases that were correctly identified by the model. In
emotion recognition, recall ensures that the CNN does not miss detecting critical emotions such as “fear” or “sadness,” which are
vital in applications like mental health monitoring. High recall ensures that most true instances of an emotion are detected, even if
some false positives occur. TABLE III illustrates the recall performance of an emotion recognition model for seven emotions.
Happiness has a recall of 0.85 with 6,215 samples, sadness 0.71 with 4,850 samples, anger 0.68 with 4,132 samples, fear 0.62
with 3,847 samples, disgust 0.59 with 2,500 samples, surprise 0.80 with 5,056 samples, and neutral 0.76 with 6,500 samples.
Fig. 4 illustrates the F1-scores of the emotion recognition model for the seven emotions.
Table III. Recall Scores of Emotion Recognition Model Across Different Emotions

Emotion      Recall   Samples
Happiness    0.85      6,215
Sadness      0.71      4,850
Anger        0.68      4,132
Fear         0.62      3,847
Disgust      0.59      2,500
Surprise     0.80      5,056
Neutral      0.76      6,500
Overall      0.74     35,887
Fig. 4. F1-Score of Emotion Recognition Model Across Different Emotions
4) F1-Score: The F1-score is the harmonic mean of precision and recall, providing a balanced metric when both false positives
and false negatives must be minimized. It is especially important in datasets like FER-2013, where certain classes (e.g., “disgust”)
have significantly fewer samples. By combining precision and recall, the F1-score ensures a fairer evaluation of the CNN’s
performance across different emotions. Fig. 5 shows the accuracy of the facial emotion recognition model for the seven
emotions; the x-axis lists the emotions: Happiness, Sadness, Anger, Fear, Disgust, Surprise, and Neutral. TABLE IV displays
the F1-score performance of the model across the seven emotions.
Table IV F1-Score Results for Emotion Recognition on FER-2013 Dataset

Emotion      F1-Score   Samples
Happiness    0.86        6,215
Sadness      0.72        4,850
Anger        0.69        4,132
Fear         0.63        3,847
Disgust      0.60        2,500
Surprise     0.81        5,056
Neutral      0.77        6,500
Overall      0.75       35,887
Fig. 5. Accuracy of Emotion Recognition Model Across Different Emotions
Together, these metrics offered a comprehensive evaluation framework that goes beyond overall accuracy, ensuring reliable and
fair assessment of the CNN’s ability to recognize diverse facial expressions.
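As a quick sanity check, the overall F1-score reported in Table IV is consistent with the harmonic mean of the overall precision (0.77, Table II) and recall (0.74, Table III):

```python
# Harmonic mean of the reported overall precision and recall.
precision, recall = 0.77, 0.74
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))    # 0.75 -- matches the overall F1 in Table IV
```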
V. Conclusion
This research comprehensively evaluated the effectiveness of a CNN-based framework for automatic facial emotion recognition,
achieving an overall accuracy of 86% and robust performance across precision, recall, and F1-score metrics on the FER-2013
dataset. By leveraging deep hierarchical feature extraction, data augmentation, and optimized CNN architectures, the proposed
approach successfully captures subtle emotional cues without relying on handcrafted features, making it suitable for real-world
applications in mental health monitoring, adaptive tutoring, human–computer interaction, and surveillance systems. Despite these
promising results, challenges such as variations in cultural expression, spontaneous facial movements, occlusions, and
imbalanced datasets persist. Future research directions include integrating multimodal data such as speech, physiological signals,
and body gestures to enhance emotion recognition accuracy, developing more diverse and representative datasets, exploring
lightweight and real-time CNN models for deployment on edge devices, and addressing ethical considerations related to privacy,
fairness, and responsible AI deployment to ensure trustworthy and socially beneficial emotion-aware systems.
References
1. W. G. Colares, M. G. F. Costa and C. F. F. Costa Filho, "Enhancing Emotion Recognition: A Dual-Input Model for
Facial Expression Recognition Using Images and Facial Landmarks," 2024 46th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 2024, pp. 1-5, doi:
10.1109/EMBC53108.2024.10782924.
2. G. Dhankhar and R. Vir, "Implementing Human Emotion Detection using Convolutional Neural Networks: An
Optimized CNN-RFE-Attention Approach," 2025 5th International Conference on Pervasive Computing and Social
Networking (ICPCSN), Salem, India, 2025, pp. 1647-1652, doi: 10.1109/ICPCSN65854.2025.11035604.
3. M. A. Almulla, "Facial Expression Recognition Using Deep Convolution Neural Networks," 2024 IEEE Annual
Congress on Artificial Intelligence of Things (AIoT), Melbourne, Australia, 2024, pp. 69-71, doi:
10.1109/AIoT63253.2024.00022.
4. S. V. Kumar, G. Sunil, R. R. Hussein, S. Manju Vidhya and S. M. Sundaram, "Attention based ConVnet-Recurrent
Neural Network for Facial Recognition and Emotion Detection," 2024 International Conference on Intelligent
Algorithms for Computational Intelligence Systems (IACIS), Hassan, India, 2024, pp. 1-5, doi:
10.1109/IACIS61494.2024.10721873.
5. S. S. Khan, D. Sengupta, A. Ghosh and A. Chaudhuri, "Decoding Emotions: Facial Expression Recognition and
Semantic Rating Prediction with Deep Learning," 2025 Devices for Integrated Circuit (DevIC), Kalyani, India, 2025, pp.
820-825, doi: 10.1109/DevIC63749.2025.11012558.
6. S. A. Hussain, N. Y. Reddy, J. Srivardhan, A. Sharma, S. Sharma and S. Gochhait, "Automated Emotion Recognition
from Facial Expressions using Convolutional Neural Network," 2024 5th International Conference on Data Analytics for
Business and Industry (ICDABI), Zallaq, Bahrain, 2024, pp. 139-143, doi: 10.1109/ICDABI63787.2024.10800158.
7. H. Meher and B. Verma, "A review on Facial Emotion Recognition using CNN," 2024 1st International Conference on
Advances in Computing, Communication and Networking (ICAC2N), Greater Noida, India, 2024, pp. 916-920, doi:
10.1109/ICAC2N63387.2024.10895524.
8. J. Lu, Y. Ma, F. Zheng and J. Wang, "Facial Emotion Recognition Based on Local Feature," 2024 4th International
Conference on Robotics, Automation and Intelligent Control (ICRAIC), Changsha, China, 2024, pp. 603-606, doi:
10.1109/ICRAIC65937.2024.00112.
9. V. V. Avabratha, S. Rana, S. Narayan, S. Y. Raju and S. S, "Speech and Facial Emotion Recognition using
Convolutional Neural Network and Random Forest: A Multimodal Analysis," 2024 Asia Pacific Conference on
Innovation in Technology (APCIT), MYSORE, India, 2024, pp. 1-5, doi: 10.1109/APCIT62007.2024.10673495.
10. P. Mishra, A. S. Verma, P. Chaudhary and A. Dutta, "Emotion Recognition from Facial Expression Using Deep
Learning Techniques," 2024 IEEE 9th International Conference for Convergence in Technology (I2CT), Pune, India,
2024, pp. 1-6, doi: 10.1109/I2CT61223.2024.10543313.
11. D. Harianto, S. Filbert, A. B. Cahyakusuma and A. Y. Zakiyyah, "Analyzing Customer Satisfaction Through Face
Emotion Recognition: A Comparative Study of Convolutional Neural Networks (CNN) and Long Short Term Memory
(LSTM)," 2024 10th International Conference on Smart Computing and Communication (ICSCC), Bali, Indonesia,
2024, pp. 50-54, doi: 10.1109/ICSCC62041.2024.10690302.
12. T. Babu, E. P. M and R. R. Nair, "Real-Time Facial Expression Recognition Using Deep Learning for Enhanced Human-
Machine Interactions," 2024 International BIT Conference (BITCON), Dhanbad, India, 2024, pp. 1-6, doi:
10.1109/BITCON63716.2024.10984634.
13. V. Sharma, K. K. Sharma, T. K. Vashishth, R. Panwar, B. Kumar and S. Chaudhary, "Brain-Computer Interface:
Bridging the Gap Between Human Brain and Computing Systems," 2023 International Conference on Research
Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE),
Chennai, India, 2023, pp. 1-5, doi: 10.1109/RMKMATE59243.2023.10369702.
14. V. Sharma, Alekh, K. Chaudhary, T. K. Vashishth, S. Chaudhary and B. Kumar, "Fusing Depths: Investigating the
Synergy of Convolutional Neural Networks and Long Short-Term Memory Networks for Enhanced Image Caption
Generation," 2024 International Conference on Cognitive Robotics and Intelligent Systems (ICC - ROBINS),
Coimbatore, India, 2024, pp. 103-107, doi: 10.1109/ICC-ROBINS60238.2024.10533921.
15. T. K. Vashishth, Vikas, B. Kumar, R. Panwar, S. Kumar and S. Chaudhary, "Exploring the Role of Computer Vision in
Human Emotion Recognition: A Systematic Review and Meta-Analysis," 2023 Second International Conference on
Augmented Intelligence and Sustainable Systems (ICAISS), Trichy, India, 2023, pp. 1071-1077, doi:
10.1109/ICAISS58487.2023.10250614.
16. R. Sharma, V. Sharma, T. K. Vashishth, S. Shashi, A. Pandey and S. Chaudhary, "Revealing the Reliability of Amazon
Products via Innovative Fake Review Detection using Machine Learning," 2025 6th International Conference on
Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 2025, pp. 217-221,
doi: 10.1109/ICICV64824.2025.11086089.
17. V. Sharma, T. K. Vashishth, S. Kumar, R. Kumar, S. Chaudhary and S. Vidyant, "Next-Gen Child Search: Advanced
Predictions with Facial Features and Skin Color," 2025 Seventh International Conference on Computational Intelligence
and Communication Technologies (CCICT), Sonepat, India, 2025, pp. 551-556, doi: 10.1109/CCICT65753.2025.00089.
18. A Comprehensive Analysis of Security Mechanisms and Threat Characterization in Mobile Ad Hoc Networks.
(2025). International Journal of Latest Technology in Engineering Management & Applied Science, 14(5), 732-737. doi:
10.51583/IJLTEMAS.2025.140500079
19. A Comparative Study of Deep Learning Models for Fake News Classification. (2025). International Journal of Latest
Technology in Engineering Management & Applied Science, 14(9), 188-195. doi: 10.51583/IJLTEMAS.2025.1409000026