Page 748
www.rsisinternational.org
INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue III, March 2026
Emotion-Aware Wearable Health Monitoring System Using ML-Based Sentiment Inference
Manikandan B¹, Pradeepkumar S², Thirumoorthi S², Vishwa S², Vedhesh K²
¹Assistant Professor, IT, Hindusthan Institute of Technology, Coimbatore
²Student, Fourth Year, IT, Hindusthan Institute of Technology, Coimbatore
DOI: https://doi.org/10.51583/IJLTEMAS.2026.150300061
Received: 22 March 2026; Accepted: 27 March 2026; Published: 14 April 2026
ABSTRACT
The integration of wearable technology and machine learning has significantly advanced continuous health
monitoring systems. However, most existing wearable solutions focus primarily on physiological parameters
while neglecting emotional and mental health factors that strongly influence overall well-being. This paper
proposes an Emotion-Aware Wearable Health Monitoring System that integrates physiological sensor data with
machine learning-based sentiment inference to provide comprehensive health insights. The system collects real-
time data from heart rate, skin temperature, galvanic skin response (GSR), and accelerometer sensors, along with
contextual inputs such as speech or text. A multi-modal machine learning framework analyzes physiological and
sentiment features to detect emotional states including stress, anxiety, fatigue, and calmness. The inferred
emotional states are correlated with physical health parameters to generate personalized recommendations and
real-time alerts via a cloud-based dashboard. Experimental evaluation using collected sensor data demonstrates
the feasibility of emotion-aware health monitoring for preventive and personalized healthcare applications.
Keywords: Wearable Sensors, Emotion Recognition, Machine Learning, Sentiment Analysis, IoT Healthcare,
Real-Time Monitoring, Personalized Healthcare.
INTRODUCTION
Wearable health monitoring systems have revolutionized modern healthcare by enabling continuous tracking of
physiological parameters such as heart rate, temperature, and physical activity. These systems play a vital role in
preventive medicine, chronic disease management, and remote patient monitoring. However, although emotional
and psychological states significantly impact physical health, most wearable systems fail to incorporate emotion-
aware intelligence into their monitoring frameworks.
Stress, anxiety, and fatigue are directly linked to cardiovascular disorders, sleep disturbances, and immune
system dysfunction. Traditional wearable devices monitor physiological parameters independently without
interpreting their emotional context. This limitation creates a gap in achieving truly holistic health monitoring.
To address this gap, this work proposes a multi-modal system that integrates physiological biosignals with
sentiment inference derived from contextual data such as speech or text. By combining machine learning and
natural language processing techniques, the system provides a more comprehensive understanding of user health.
Research Objectives
To design a multi-modal emotion-aware health monitoring framework.
To implement machine learning models for emotion classification using physiological signals.
To integrate sentiment analysis with physiological data fusion.
To develop a secure cloud-based real-time monitoring system.
To enable early detection and preventive healthcare intervention.
Block Diagram
METHODOLOGY
The methodology of this study focuses on experimental design, dataset preparation, model development, and
validation of the emotion-aware wearable health monitoring framework.
Research Design and Data Collection
An experimental study was conducted to collect multi-modal physiological and contextual data under controlled
and semi-controlled emotional conditions. Participants were monitored while exposed to different emotional
stimuli such as relaxation, mild stress tasks, and cognitive workload activities.
Physiological signals including heart rate, HRV, GSR, skin temperature, and motion activity were recorded
continuously. Emotional labels were assigned based on predefined experimental conditions and participant self-
reports. Contextual sentiment data was simultaneously collected through speech or text interactions.
Dataset Preparation and Feature Engineering
The collected dataset was segmented into fixed-length time windows to capture temporal variations in emotional
states. Each segment was labeled according to the corresponding emotional condition.
Feature engineering techniques were applied to extract time-domain and frequency-domain attributes from
physiological signals. Statistical measures such as mean, variance, standard deviation, and peak detection were
computed. For textual inputs, sentiment polarity scores and contextual embeddings were generated.
The final dataset consisted of structured feature vectors representing multi-modal emotional indicators.
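As an illustration, the time-domain statistics named above can be computed for one fixed-length window with NumPy. This is a minimal sketch: the sample heart-rate signal, window length, and the simplified local-maximum peak count are assumptions for demonstration, not the study's actual extraction pipeline.

```python
import numpy as np

def window_features(segment: np.ndarray) -> dict:
    """Time-domain statistical features for one fixed-length sensor window."""
    return {
        "mean": float(np.mean(segment)),
        "variance": float(np.var(segment)),
        "std": float(np.std(segment)),
        # Simplified peak detection: count strict local maxima
        "n_peaks": int(np.sum((segment[1:-1] > segment[:-2])
                              & (segment[1:-1] > segment[2:]))),
    }

# Example: a synthetic 10-second heart-rate window sampled at 4 Hz
hr = 70 + 5 * np.sin(np.linspace(0, 4 * np.pi, 40))
feats = window_features(hr)
```

Each labeled segment would yield one such feature dictionary per sensor channel, concatenated into the structured feature vectors described above.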
Model Training and Validation
The dataset was divided into training and testing sets using an 80:20 split. To improve generalization
performance, k-fold cross-validation was applied during model training.
Supervised machine learning algorithms were trained to classify emotional states. Hyperparameter tuning was
performed to optimize classification accuracy. Model performance was evaluated using standard metrics such
as accuracy, precision, recall, and F1-score.
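A minimal scikit-learn sketch of this protocol follows, using a synthetic stand-in for the extracted feature matrix; the random-forest classifier and data dimensions are illustrative assumptions, not the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score, f1_score

# Placeholder feature matrix standing in for the extracted multi-modal features
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=42)

# 80:20 train/test split, as in the protocol above
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=5)   # 5-fold cross-validation

clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
acc = accuracy_score(y_te, y_pred)
f1 = f1_score(y_te, y_pred, average="macro")
```

Hyperparameter tuning (e.g. over tree count and depth) would wrap the same estimator in a grid or randomized search over the training folds.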
Performance Evaluation Strategy
To assess robustness, the proposed multi-modal model was compared against single-sensor baseline models.
Performance improvements were analyzed through confusion matrices and classification reports.
Additionally, system latency and response time were measured to evaluate real-time feasibility. Scalability
testing was conducted to analyze system performance under multiple concurrent data streams.
Proposed System Architecture
The proposed architecture consists of interconnected hardware and software modules designed for real-time
monitoring and analysis.
Data Acquisition Module
Wearable sensors collect continuous physiological data including:
Heart Rate Variability (HRV)
Galvanic Skin Response (GSR)
Skin Temperature
Motion/Activity Data via Accelerometer
Contextual inputs such as speech or typed text are captured through mobile devices. All data streams are time-
synchronized before processing.
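One way to sketch the time synchronization of heterogeneous streams is pandas' `merge_asof`, which aligns each sample of one stream with the nearest sample of another within a tolerance. The timestamps and sensor values below are invented for illustration.

```python
import pandas as pd

# Hypothetical streams: heart rate at 1 Hz, GSR at a slower, irregular rate
hr = pd.DataFrame({
    "t": pd.to_datetime(["2026-03-01 10:00:00", "2026-03-01 10:00:01",
                         "2026-03-01 10:00:02", "2026-03-01 10:00:03"]),
    "hr": [72, 74, 73, 75],
})
gsr = pd.DataFrame({
    "t": pd.to_datetime(["2026-03-01 10:00:00.4", "2026-03-01 10:00:02.1"]),
    "gsr": [0.31, 0.35],
})

# Align each heart-rate sample with the nearest GSR reading within 1 second
synced = pd.merge_asof(hr.sort_values("t"), gsr.sort_values("t"),
                       on="t", direction="nearest",
                       tolerance=pd.Timedelta("1s"))
```

Samples with no reading inside the tolerance would be left as missing values and handled in the preprocessing stage.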
Data Preprocessing and Feature Extraction
Raw sensor data often contains noise due to motion artifacts and environmental interference. Therefore,
preprocessing includes:
Signal filtering using low-pass filters
Normalization and scaling
Missing value handling
Time-series segmentation
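A minimal preprocessing sketch covering the filtering, normalization, and segmentation steps above, assuming a 4th-order Butterworth low-pass filter and z-score scaling; the filter order, cutoff, and sampling rate are illustrative choices, not the system's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw: np.ndarray, fs: float, cutoff_hz: float = 5.0,
               window_s: float = 10.0) -> np.ndarray:
    """Low-pass filter, z-score normalize, and segment into fixed windows."""
    b, a = butter(N=4, Wn=cutoff_hz / (fs / 2), btype="low")  # 4th-order Butterworth
    filtered = filtfilt(b, a, raw)                            # zero-phase filtering
    normed = (filtered - filtered.mean()) / (filtered.std() + 1e-8)
    win = int(window_s * fs)
    n = len(normed) // win
    return normed[: n * win].reshape(n, win)  # (n_windows, win) segments

fs = 32.0                                                 # assumed sampling rate
raw = np.random.default_rng(0).normal(size=int(60 * fs))  # 60 s synthetic signal
windows = preprocess(raw, fs)                             # shape (6, 320)
```

Missing values would be interpolated before filtering; each resulting window then feeds the feature extraction stage.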
Extracted features include:
Statistical features (mean, variance, standard deviation)
Frequency-domain features
HRV time-domain metrics
Skin conductance response peaks
These features are fed into machine learning classifiers.
Emotion Classification Model
Supervised learning algorithms such as:
Support Vector Machine (SVM)
Random Forest
Artificial Neural Networks (ANN)
Long Short-Term Memory (LSTM)
are trained to classify emotional states into predefined categories (stress, calm, fatigue, anxiety).
Natural Language Processing (NLP) techniques such as tokenization, sentiment scoring, and transformer-based
embeddings are applied to contextual speech/text data.
A multi-modal fusion layer integrates physiological and sentiment features to improve prediction accuracy.
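Feature-level (early) fusion can be sketched as concatenating the physiological feature vector with a sentiment polarity score before classification. The synthetic data, constructed labels, and logistic-regression classifier below are placeholders for illustration only, not the system's actual fusion layer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300

# Hypothetical physiological features (e.g. HRV, GSR, temperature) per window
physio = rng.normal(size=(n, 3))
# Hypothetical sentiment polarity in [-1, 1] from the NLP pipeline
sentiment = rng.uniform(-1, 1, size=(n, 1))
# Synthetic labels: "stress" when high arousal and negative sentiment co-occur
y = ((physio[:, 0] + physio[:, 1] - sentiment[:, 0]) > 0.5).astype(int)

# Early fusion: concatenate both modalities into one feature vector
fused = np.hstack([physio, sentiment])
clf = LogisticRegression(max_iter=1000).fit(fused, y)
acc = clf.score(fused, y)
```

The design choice here is that fusion happens before the classifier, so the model can learn cross-modal interactions such as elevated arousal paired with negative sentiment.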
Cloud Integration and Dashboard
The processed data is securely transmitted to a cloud server using encrypted communication protocols.
The cloud platform provides:
Real-time data storage
Dashboard visualization
Historical trend analysis
Alert generation system
Remote monitoring access for healthcare providers
Alert and Recommendation System
If abnormal emotional or physiological thresholds are detected:
Immediate alerts are sent to users
Notifications can be forwarded to caregivers
Personalized recommendations (breathing exercises, rest suggestions) are generated
This supports preventive and proactive healthcare management.
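The alert logic can be sketched as a simple rule function. All thresholds, parameter names, and messages below are invented placeholders; real thresholds would be clinically validated and personalized per user.

```python
def check_alerts(hr_bpm: float, gsr_uS: float, stress_prob: float,
                 hr_max: float = 110.0, stress_thresh: float = 0.8) -> list[str]:
    """Illustrative threshold rules for the alert and recommendation stage."""
    alerts = []
    if hr_bpm > hr_max:
        alerts.append("Elevated heart rate: consider resting")
    if stress_prob > stress_thresh:
        alerts.append("High stress detected: try a breathing exercise")
        if gsr_uS > 8.0:
            alerts.append("Notify caregiver")  # forwarded notification
    return alerts

msgs = check_alerts(hr_bpm=118, gsr_uS=9.2, stress_prob=0.91)
```

In the deployed system these rules would run server-side against the fused model output, pushing notifications through the cloud dashboard.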
Experimental Setup
To evaluate system performance, data is collected from multiple participants under controlled and natural
emotional conditions.
Dataset Description
Number of participants: (To be added)
Duration of monitoring: (To be added)
Emotional states labeled manually or through controlled stimuli
Model Training
Data split: 80% training, 20% testing
Cross-validation: 5-fold cross-validation
Hyperparameter tuning performed
Evaluation Metrics
Accuracy
Precision
Recall
F1-score
Confusion Matrix
ROC Curve
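The confusion matrix and per-class report for the four emotion categories can be produced with scikit-learn. The label set comes from the categories named above, while the toy predictions are invented for illustration.

```python
from sklearn.metrics import confusion_matrix, classification_report

labels = ["calm", "stress", "fatigue", "anxiety"]
# Invented ground truth and predictions for a handful of windows
y_true = ["calm", "stress", "stress", "fatigue",
          "anxiety", "calm", "fatigue", "stress"]
y_pred = ["calm", "stress", "fatigue", "fatigue",
          "anxiety", "calm", "fatigue", "stress"]

# Rows are true labels, columns are predictions, in the order of `labels`
cm = confusion_matrix(y_true, y_pred, labels=labels)
report = classification_report(y_true, y_pred, labels=labels, zero_division=0)
```

Off-diagonal cells (here, one stress window predicted as fatigue) directly expose which emotion pairs the model confuses.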
RESULTS AND DISCUSSIONS
The proposed multi-modal emotion-aware health monitoring system was evaluated for classification performance,
robustness, and real-time operation. The experimental results show that integrating physiological signals with
sentiment inference improves emotion classification accuracy compared with single-modality approaches.
The multi-modal fusion model produced more stable classifications owing to the complementary nature of the
features obtained from heart rate variability, galvanic skin response, skin temperature, and contextual sentiment
analysis. In particular, it was less prone than single-sensor baselines to confusing stress with physical exertion,
and the inclusion of contextual sentiment analysis reduced false positives in stress detection. Confusion matrix
analysis showed improved discrimination between related states such as anxiety and fatigue.
The latency test results also confirmed that the system was able to maintain a stable transmission with minimal
delay during real-time cloud synchronization. Hence, the system was found to be effective for continuous
monitoring applications. In addition, the average response time for generating alerts was within accepted
real-time healthcare thresholds. Finally, the scalability test confirmed that the system handled multiple
concurrent users without compromising performance. Overall, the experimental results confirmed that multi-modal
emotion-aware monitoring improves prediction reliability and supports preventive healthcare by recognizing early
signs of emotional distress before severe physiological problems arise.
Ethical Considerations
Ethical compliance is essential when collecting physiological and emotional data. All participants must provide
informed consent before data collection begins, with full knowledge of what data is collected, for how long, and
how it will be used.
Personal identifiers such as names, contact details, and biometric identity must be removed, and anonymization
or pseudonymization applied before storage. Secure encryption protocols must protect data in transit between the
wearable devices and the cloud servers. Access to collected data must be restricted through role-based access
control so that only authorized personnel can view it, and the system must adhere to data protection regulations
in the health sector.
Users must retain the right to access, modify, or delete their data. Finally, ethical AI principles must guide
the system, ensuring fairness, transparency, and accountability in decision-making and preventing bias or
discrimination in emotion classification.
Challenges and Limitations
Sensor Accuracy and Calibration Issues
Wearable sensors may produce noisy or inconsistent data due to motion artifacts, environmental factors, or
improper placement. Variations in skin contact and external temperature can affect readings. Regular calibration
is required to maintain measurement accuracy. Inaccurate sensor data may lead to incorrect emotion
classification. Ensuring high-quality hardware components can reduce such limitations.
Emotional State Complexity
Human emotions are complex and subjective in nature. The same physiological pattern may represent different
emotions for different individuals. Cultural and psychological differences also influence emotional expression.
This makes universal emotion classification challenging. Personalized machine learning models can partially
address this issue.
Data Privacy and Ethical Concerns
The system collects sensitive physiological and emotional data, which raises privacy concerns. Unauthorized
access or data breaches may compromise user confidentiality. Ethical concerns arise when emotional data is
misused or shared without consent. Strong encryption and strict access control mechanisms are essential.
Compliance with healthcare data protection standards is necessary.
CONCLUSION
This paper presented a comprehensive emotion-aware wearable health monitoring framework that integrates
multi-modal physiological signals with machine learning-based sentiment inference. Unlike conventional
wearable systems that focus solely on physical parameters, the proposed approach establishes a correlation
between emotional and physiological indicators to enable holistic health assessment.
The integration of supervised learning models and natural language processing techniques improves emotion
classification accuracy while reducing false stress detections. The cloud-based scalable architecture and real-
time alert mechanism enhance its applicability in preventive healthcare, remote patient monitoring, corporate
wellness programs, elderly care, and sports performance tracking.
Although certain challenges such as sensor variability, personalization requirements, and privacy concerns
remain, the proposed system demonstrates strong potential for next-generation intelligent healthcare
applications. Future work will focus on large-scale clinical validation, adaptive personalized modeling, edge
computing optimization, and privacy-preserving machine learning techniques for secure large-scale deployment.