INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,  
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)  
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue XII, December 2025  
Communication-Aware Deep Learning Models for Real-Time Solar  
Energy Forecasting in Intelligent Power Networks  
Curllie Jeremiah Farmanor, Tefera Ephrem Markos  
School of Computer Science and School of Artificial Intelligence, Nanjing University of Information  
Science and Technology  
Received: 21 December 2025; Accepted: 30 December 2025; Published: 08 January 2026  
ABSTRACT  
Precise solar power forecasting is critical for the stability and efficiency of modern intelligent power networks.  
However, the reliability of these forecasts is often compromised by communication network impairments, such  
as latency and packet loss, occurring between solar plants and control centers. This paper proposes a  
Communication-Aware Long Short-Term Memory (Comm-Aware LSTM) framework designed to integrate  
network-state information directly into the forecasting process. We model the system using a distributed  
communication topology consisting of solar plants, edge nodes, and cloud-based control centers.  
Our experimental results demonstrate that the proposed model significantly outperforms traditional Baseline  
LSTM architectures under varying network conditions. Specifically, the Comm-Aware LSTM exhibits superior  
training convergence, achieving lower Mean Squared Error (MSE) while maintaining a negligible computational  
overhead, adding only 1.6% more trainable parameters and approximately 0.3% additional inference latency. Correlation
analysis further reveals that by explicitly accounting for latency-induced errors, the model provides robust
predictions even in high-latency scenarios (0.5 s). This research confirms that communication-aware deep learning
architectures are essential for the next generation of resilient, edge-integrated smart grids.  
Keywords: Solar Power Forecasting, Comm-Aware LSTM, Intelligent Power Networks, Communication  
Latency  
INTRODUCTION  
The rapid global transition toward sustainable energy systems has positioned solar power as a critical component  
of modern electricity generation portfolios. Due to its clean, renewable, and increasingly cost-effective nature,  
photovoltaic (PV) generation is being widely deployed across residential, commercial, and utility-scale power  
systems. However, the inherent intermittency and variability of solar energy, driven by dynamic meteorological  
conditions, pose significant challenges to power system stability, energy management, and operational planning.  
As a result, accurate and real-time solar energy forecasting has become a fundamental requirement for intelligent  
power networks and smart grid infrastructures.  
Short-term solar energy forecasting plays a vital role in multiple grid-level applications, including load  
balancing, unit commitment, frequency regulation, and energy trading. Inaccurate forecasts can lead to  
inefficient resource allocation, increased reserve requirements, and higher operational costs. Traditional  
forecasting approaches, such as persistence models and statistical regression techniques, have demonstrated  
limited capability in capturing the nonlinear and temporal dependencies inherent in solar power generation.  
Consequently, recent research has increasingly adopted machine learning and deep learning methods to enhance  
forecasting accuracy.  
Deep learning models, particularly recurrent neural networks (RNNs) such as Long Short-Term Memory  
(LSTM) and Gated Recurrent Unit (GRU) networks, have shown strong performance in modeling time-series  
data due to their ability to capture long-term temporal dependencies. Convolutional neural networks (CNNs) and  
attention-based architectures have further improved forecasting performance by enabling feature extraction and  
adaptive weighting of historical information. These approaches have been successfully applied to short-term and  
ultra-short-term solar power prediction tasks, demonstrating substantial improvements over traditional methods  
[1]–[3].
In parallel with advances in forecasting algorithms, power systems are undergoing a transformation toward  
intelligent power networks, where distributed energy resources, sensors, and controllers are interconnected  
through heterogeneous communication infrastructures. In such systems, solar generation data is transmitted from  
geographically distributed PV units to edge devices, aggregators, or centralized control centers using wired or  
wireless communication networks. Technologies such as the Internet of Things (IoT), 5G, and software-defined  
networking are increasingly integrated into smart grids to support real-time monitoring and control [4], [5].  
Despite these advancements, most existing deep learning-based solar forecasting studies implicitly assume ideal
communication conditions, where data is delivered instantaneously, reliably, and without loss. In practical  
intelligent power networks, however, communication links are subject to latency, packet loss, bandwidth  
limitations, and asynchronous data arrival. These network-induced impairments can distort temporal correlations  
in input data, introduce missing observations, and ultimately degrade forecasting accuracy. The impact of  
communication constraints becomes particularly critical for real-time forecasting applications, where delayed or  
incomplete data may lead to suboptimal or unstable grid control decisions.  
Recent studies have begun to investigate communication-aware learning in distributed and edge-based artificial  
intelligence systems, highlighting the importance of jointly considering communication and learning processes  
[6], [7]. However, the application of communication-aware deep learning specifically to solar energy forecasting  
in intelligent power networks remains largely unexplored. There is a clear research gap in developing forecasting  
models that are robust to realistic network conditions and capable of maintaining high accuracy under  
communication impairments.  
To address this gap, this paper proposes a communication-aware deep learning framework for real-time solar  
energy forecasting in intelligent power networks. Unlike conventional approaches, the proposed method  
explicitly models communication latency and packet loss during training and evaluation, enabling the forecasting  
model to learn robust representations under non-ideal data transmission conditions. A PyTorch-based simulation  
framework is developed to jointly emulate solar power dynamics and network-induced impairments, allowing  
systematic performance evaluation under diverse communication scenarios.  
The main contributions of this paper are threefold. First, a realistic communication model is integrated into the  
solar forecasting pipeline to simulate latency, packet loss, and asynchronous data arrival in intelligent power  
networks. Second, a communication-aware deep learning forecasting model is designed and trained to improve  
robustness against network-induced disturbances. Third, extensive simulation-based experiments are conducted  
to analyze forecasting accuracy, robustness, and accuracy-latency trade-offs under varying network conditions.
The remainder of this paper is organized as follows. Section 2 reviews related work on solar energy forecasting,  
deep learning in smart grids, and communication-aware learning. Section 3 presents the system architecture and  
problem formulation. Section 4 describes the proposed communication-aware deep learning framework. Section  
5 details the experimental setup and simulation environment. Section 6 discusses the results and performance  
evaluation. Finally, Section 7 concludes the paper and outlines future research directions.  
RELATED WORK  
This section reviews existing research relevant to solar energy forecasting, deep learning applications in smart  
grids, and communication-aware learning in networked systems. The objective is to position the proposed work  
within the current literature and to highlight the research gap addressed by this study.  
Solar Energy Forecasting Methods  
Solar energy forecasting has been extensively studied due to its importance in power system operation and  
energy management. Early approaches relied on physical and statistical models, including clear-sky models,  
autoregressive integrated moving average (ARIMA), and regression-based techniques. While these methods are  
computationally efficient, they often fail to capture the nonlinear and highly dynamic characteristics of solar  
power generation, particularly under rapidly changing weather conditions.  
To overcome these limitations, machine learning methods such as support vector machines, random forests, and  
k-nearest neighbours have been applied to short-term and ultra-short-term solar forecasting. These approaches  
improve prediction accuracy by learning nonlinear relationships between meteorological variables and power  
output. However, their performance is still constrained when dealing with long temporal dependencies and high-  
dimensional input data.  
More recently, deep learning techniques have become the dominant paradigm in solar energy forecasting.  
Recurrent neural networks, especially Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)  
architectures, have demonstrated strong capability in modeling sequential solar power data by retaining temporal  
information over extended horizons. Convolutional neural networks (CNNs) have been used to extract spatial  
and temporal features from irradiance maps and satellite imagery, while hybrid CNN-LSTM models combine
feature extraction and temporal modeling to further enhance forecasting accuracy. Attention-based and  
Transformer models have also been explored to dynamically weight historical observations and improve  
robustness to temporal variability [8]–[11].
Despite these advances, most deep learning-based solar forecasting studies assume that input data is
continuously available and free from transmission delays or losses. This assumption limits the applicability of  
such models in real-world intelligent power networks, where data acquisition and transmission are subject to  
communication constraints.  
Deep Learning in Smart Grids and Intelligent Power Networks  
The integration of deep learning into smart grids has enabled advanced functionalities such as load forecasting,  
fault detection, demand response, and renewable energy integration. With the proliferation of distributed energy  
resources and sensing devices, intelligent power networks increasingly rely on data-driven methods for real-time  
monitoring and control. Deep learning models are often deployed at centralized cloud platforms or distributed  
across edge computing nodes to reduce latency and communication overhead.  
Edge-cloud collaborative learning frameworks have been proposed to balance computational efficiency and
communication cost in smart grid applications. In such architectures, preliminary data processing or inference  
is performed at edge devices, while more complex model training or aggregation is handled in the cloud. These  
approaches reduce end-to-end latency and improve scalability, particularly in large-scale power networks [12],  
[13].  
However, existing deep learning applications in smart grids largely focus on improving prediction accuracy or  
computational efficiency, with limited consideration of the impact of communication impairments on learning  
performance. The interaction between communication networks and forecasting models is often abstracted away,  
resulting in overly optimistic performance evaluations.  
Communication-Aware and Networked Learning Systems  
Communication-aware learning has emerged as an important research direction in distributed artificial  
intelligence, federated learning, and edge computing systems. Prior studies have examined the effects of latency,  
packet loss, and bandwidth constraints on distributed model training and inference. Techniques such as gradient  
compression, asynchronous updates, and delay-tolerant optimization have been proposed to mitigate  
communication overhead and improve learning robustness in networked environments [14]–[16].
In time-sensitive applications, communication delays can significantly affect model performance by introducing  
stale or missing data. Recent works have incorporated network state information into learning processes,  
enabling models to adapt to varying communication conditions. These studies demonstrate that joint  
optimization of communication and learning can yield superior performance compared to communication-  
agnostic approaches.  
Nevertheless, the majority of communication-aware learning research has focused on generic machine learning  
tasks or computer vision applications, with limited attention to energy systems and renewable power forecasting.  
The unique characteristics of solar energy data, including strong temporal patterns and weather dependence,  
necessitate domain-specific investigation.  
Research Gap and Motivation  
Based on the above review, it is evident that deep learning-based solar energy forecasting has achieved
significant accuracy improvements under idealized assumptions, while communication-aware learning has  
progressed primarily in non-energy domains. There is a lack of integrated studies that jointly consider solar  
power forecasting and realistic communication constraints within intelligent power networks.  
This paper addresses this gap by developing a communication-aware deep learning framework specifically  
tailored for real-time solar energy forecasting. By explicitly modeling latency, packet loss, and asynchronous  
data arrival during training and evaluation, the proposed approach provides a more realistic assessment of  
forecasting performance and offers practical insights for deploying deep learning models in intelligent power  
networks.  
System Architecture and Problem Formulation  
This section presents the system architecture of the intelligent power network considered in this study and  
formally defines the solar energy forecasting problem under communication constraints. The objective is to  
establish a realistic operational context in which solar generation data is acquired, transmitted, and processed for  
real-time forecasting.  
Intelligent Power Network Architecture  
The intelligent power network is composed of geographically distributed photovoltaic (PV) generation units,  
communication-enabled sensing devices, edge computing nodes, and a central control or energy management  
system. Each PV unit is equipped with sensors that continuously measure solar power output and relevant  
meteorological variables, such as solar irradiance, ambient temperature, and cloud coverage. These  
measurements are sampled at fixed time intervals and transmitted over communication networks to downstream  
processing units.  
At the network edge, edge computing nodes or local aggregators receive data from multiple PV units. These  
nodes may perform preliminary data pre-processing, buffering, or inference tasks to reduce communication  
overhead and response latency. Aggregated data is then forwarded to a central control center or cloud-based  
platform, where global forecasting, grid optimization, and control decisions are executed. Communication links  
between PV units, edge nodes, and the control center may rely on heterogeneous technologies, including wired  
networks, wireless sensor networks, cellular systems, and emerging 5G-enabled infrastructures.  
In practical deployments, the communication network introduces non-ideal effects such as variable latency,  
packet loss, limited bandwidth, and asynchronous data arrival. These effects disrupt the temporal consistency of  
the received data streams and pose challenges for real-time forecasting models that rely on sequential  
information.  
Figure 1. Communication Topology of Intelligent Power Network
Communication Model  
To capture realistic network behavior, communication between sensing nodes and forecasting units is modeled  
as a stochastic process characterized by latency and packet loss. Latency represents the end-to-end delay  
experienced by a data packet during transmission, including queuing, processing, and propagation delays. Packet  
loss accounts for the probability that transmitted data fails to reach the destination due to congestion,  
interference, or link failures.  
Let d_t denote the communication delay associated with the data sample collected at time step t. Due to network
variability, d_t is modeled as a random variable following a configurable probability distribution. Packet loss is
represented by a Bernoulli process, where each data sample is either successfully received or lost. Lost samples
result in missing observations at the forecasting unit, while delayed samples may arrive out of order or outside
the expected time window.
These communication impairments collectively lead to asynchronous and incomplete input sequences, which  
deviate from the idealized assumptions commonly made in solar forecasting studies.  
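To make this model concrete, the following sketch applies stochastic latency and Bernoulli packet loss to a measurement sequence. The exponential delay distribution and the function name are illustrative assumptions; the paper only requires a configurable delay distribution and a Bernoulli loss process.

```python
import numpy as np

def impair_sequence(x, mean_delay=0.2, loss_prob=0.1, rng=None):
    """Apply stochastic latency and Bernoulli packet loss to a sequence.

    x: array of shape (T, F), one feature vector per time step.
    Returns (x_impaired, delays, mask). Lost samples are zeroed and
    flagged in the mask; per-sample delays are drawn from an
    exponential distribution (an illustrative choice).
    """
    rng = np.random.default_rng() if rng is None else rng
    T = x.shape[0]
    delays = rng.exponential(mean_delay, size=T)        # end-to-end delay per sample
    mask = (rng.random(T) >= loss_prob).astype(float)   # 1 = received, 0 = lost
    x_impaired = x * mask[:, None]                      # lost samples become zeros
    return x_impaired, delays, mask
```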
Forecasting Problem Definition  
The objective of the forecasting system is to predict short-term solar power generation based on historical  
observations received over the communication network. Let X_t represent the multivariate input vector at time
step t, consisting of solar power measurements and associated meteorological features. Given a historical
window of length T, the forecasting task is to predict the solar power output at a future time step t + τ, where τ
denotes the prediction horizon.
Under ideal communication conditions, the forecasting model receives a complete and temporally ordered
sequence (X_{t−T+1}, …, X_t). In contrast, under realistic network conditions, the received sequence may contain
delayed, missing, or misaligned samples due to communication latency and packet loss. The forecasting problem
thus becomes one of learning a mapping from an impaired input sequence to the future solar power output.
Formally, the forecasting objective can be expressed as learning a function f(·) such that
ŷ_{t+τ} = f(X̃_{t−T+1:t}),
where X̃_{t−T+1:t} denotes the communication-impaired input sequence and ŷ_{t+τ} is the predicted solar power
output.
Communication-Aware Learning Objective  
The goal of communication-aware learning is to design a forecasting model that maintains high prediction  
accuracy despite network-induced impairments. This requires the model to be robust to missing data, tolerant of  
delayed inputs, and capable of exploiting partial temporal information. Unlike communication-agnostic models  
trained solely on idealized datasets, the proposed approach incorporates communication effects directly into the  
training and evaluation process.  
The learning objective is therefore to minimize forecasting error while accounting for communication-induced  
uncertainty. By exposing the model to diverse network conditions during training, the forecasting system can  
learn representations that generalize well to real-world intelligent power network deployments.  
Communication-Aware Deep Learning Framework  
This section presents the proposed communication-aware deep learning framework for real-time solar energy  
forecasting in intelligent power networks. The framework is designed to explicitly account for network-induced  
impairments, including latency, packet loss, and asynchronous data arrival, while maintaining high forecasting  
accuracy. A baseline deep learning model is first introduced, followed by the proposed communication-aware  
extensions.  
Baseline Deep Learning Forecasting Model  
As a baseline, a recurrent neural network based on the Long Short-Term Memory (LSTM) architecture is  
employed to model the temporal dynamics of solar power generation. LSTM networks are well suited for time-  
series forecasting due to their ability to capture long-term dependencies and mitigate vanishing gradient issues.  
The baseline model takes as input a fixed-length historical sequence of solar power and meteorological features  
and outputs a short-term prediction of future solar power generation. Under ideal communication conditions, the  
input sequence is assumed to be complete, temporally ordered, and uniformly sampled. The baseline LSTM is  
trained using a mean squared error loss function and serves as a communication-agnostic reference for  
performance comparison.  
Communication-Aware Input Representation  
In realistic intelligent power networks, the input data stream received by the forecasting model may be  
incomplete or temporally misaligned due to communication latency and packet loss. To address this issue, the  
proposed framework introduces a communication-aware input representation that augments the original feature  
set with network-related information.  
Specifically, each input sample is associated with a latency indicator representing the transmission delay  
experienced by the data packet. Missing samples resulting from packet loss are explicitly encoded using masking  
mechanisms rather than simple interpolation, allowing the model to distinguish between genuine zero values and  
unavailable observations. This enriched representation enables the forecasting model to learn the relationship  
between network conditions and data reliability.  
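A minimal sketch of this augmented representation follows, assuming the latency indicator and availability mask are simply appended to the measurements as extra feature channels (the concatenation layout and function name are our assumptions, not the paper's exact implementation).

```python
import numpy as np

def build_comm_aware_input(x, delays, mask):
    """Augment measurement features with network-state channels.

    x:      (T, F) impaired measurements (lost samples zeroed)
    delays: (T,)   per-sample communication delay in seconds
    mask:   (T,)   1.0 if the sample was received, 0.0 if lost

    Returns a (T, F + 2) array: the original features plus a latency
    channel and an availability channel, letting the model distinguish
    a genuine zero reading from a lost packet.
    """
    return np.concatenate([x, delays[:, None], mask[:, None]], axis=1)
```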
Communication-Aware LSTM Architecture  
Building upon the baseline LSTM, a communication-aware LSTM architecture is developed by integrating delay  
and availability information into the learning process. The augmented input sequence includes both the original  
measurement features and additional channels that encode latency and data availability. This allows the recurrent  
network to modulate its internal state updates based on the reliability and freshness of incoming data.  
To improve robustness, the model is trained on datasets generated under diverse communication scenarios,  
including varying levels of latency and packet loss. By exposing the network to impaired input sequences during  
training, the model learns to adaptively weight historical information and mitigate the impact of missing or  
delayed samples.  
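The architecture described above can be sketched in PyTorch as follows. The hidden size, layer count, and class name are illustrative assumptions rather than the paper's exact configuration; the essential point is that the LSTM input width grows by two channels for latency and availability.

```python
import torch
import torch.nn as nn

class CommAwareLSTM(nn.Module):
    """LSTM forecaster whose input includes latency and availability
    channels alongside the measurement features (a minimal sketch)."""

    def __init__(self, n_features, hidden_size=64, num_layers=1):
        super().__init__()
        # +2 input channels: per-step latency and availability mask
        self.lstm = nn.LSTM(n_features + 2, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # single-step power forecast

    def forward(self, x_aug):
        # x_aug: (batch, T, n_features + 2)
        out, _ = self.lstm(x_aug)
        # predict from the final hidden state of the sequence
        return self.head(out[:, -1, :]).squeeze(-1)
```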
Latency-Weighted Loss Function  
In real-time forecasting applications, prediction errors associated with stale or highly delayed data can be more  
detrimental than errors under timely conditions. To reflect this practical consideration, a latency-weighted loss  
function is introduced. The standard mean squared error is scaled by a weight factor that increases with  
communication delay, penalizing inaccurate predictions made under adverse network conditions.  
This loss formulation encourages the model to prioritize robustness and accuracy when operating under realistic  
communication constraints. The resulting optimization process jointly considers forecasting performance and  
communication-induced uncertainty.  
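One way to realize such a loss is a linear weight in the delay, as sketched below; the paper specifies only that the weight increases with communication delay, so the linear form and the alpha parameter are assumptions.

```python
import torch

def latency_weighted_mse(y_pred, y_true, delays, alpha=1.0):
    """Mean squared error scaled by a weight that grows with delay.

    delays: per-sequence mean communication delay (seconds) for each
    element of the batch. w = 1 + alpha * delay is one simple choice
    of increasing weight.
    """
    w = 1.0 + alpha * delays
    return torch.mean(w * (y_pred - y_true) ** 2)
```

With zero delay the weights collapse to 1 and the loss reduces to the standard MSE, so the formulation is a strict generalization of the baseline objective.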
Model Training and Inference Workflow  
The training process follows a simulation-driven workflow in which communication impairments are injected  
into the input data prior to model training. For each training batch, network latency and packet loss are randomly  
sampled according to predefined distributions, and the corresponding impaired sequences are generated. The  
communication-aware LSTM is then trained using these sequences, enabling the model to generalize across a  
wide range of network conditions.  
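The per-batch injection of impairments can be sketched as follows, using random stand-in data and illustrative dimensions; only the overall workflow (sample impairments, build augmented inputs, optimize a delay-weighted loss) follows the description above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
B, T, F = 16, 24, 4                        # batch, window length, features
lstm = nn.LSTM(F + 2, 32, batch_first=True)
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)

for step in range(5):                      # a few illustrative steps
    x = torch.rand(B, T, F)                # clean measurements (stand-in data)
    y = x[:, -1, 0]                        # toy target: last power reading
    # sample impairments for this batch
    delay = torch.rand(B, T) * 0.5                    # delays in [0, 0.5) s
    mask = (torch.rand(B, T) > 0.1).float()           # ~10% packet loss
    x_imp = x * mask.unsqueeze(-1)                    # lost samples zeroed
    x_aug = torch.cat([x_imp, delay.unsqueeze(-1), mask.unsqueeze(-1)], dim=-1)
    out, _ = lstm(x_aug)
    y_hat = head(out[:, -1]).squeeze(-1)
    w = 1.0 + delay.mean(dim=1)                       # latency-weighted MSE
    loss = torch.mean(w * (y_hat - y) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```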
During inference, the trained model receives real-time data streams affected by unknown communication  
conditions and produces short-term solar power forecasts. Due to its exposure to diverse network scenarios  
during training, the model maintains stable performance even when communication quality fluctuates.  
This communication-aware deep learning framework forms the core of the proposed approach and provides the  
basis for the experimental evaluation presented in the following sections.  
Figure 2. Model Training and Inference Workflow
Experimental Setup  
This section describes the simulation-driven experimental setup used to evaluate the proposed communication-  
aware deep learning framework. The objective is to ensure reproducibility, realism, and fairness in assessing the  
performance of the forecasting models under varying communication conditions. All experiments are  
implemented using the PyTorch deep learning framework.  
Dataset Description and Generation  
The experiments are conducted using time-series solar power generation data sampled at regular intervals,  
consistent with monitoring practices in intelligent power networks. Each data sample consists of historical solar  
power output and associated meteorological features, including solar irradiance, ambient temperature, and cloud  
coverage. To ensure reproducibility and flexibility, the simulation framework supports both real-world datasets  
and synthetically generated solar profiles that follow realistic diurnal and seasonal patterns.  
For each experiment, the data is segmented into fixed-length input sequences using a sliding window approach.  
Each sequence contains T consecutive time steps of historical observations, which are used to predict the solar
power output at a future horizon τ. All features are normalized using min-max scaling based on the training
dataset to prevent information leakage and to stabilize model training.
The dataset is divided into training, validation, and testing subsets using an 80/10/10 split. The training set is
used for model optimization, the validation set for hyperparameter tuning and early stopping, and the testing set  
exclusively for performance evaluation.  
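A minimal sketch of the windowing, chronological 80/10/10 split, and train-only min-max scaling described above; the window length, horizon, and synthetic stand-in data are illustrative assumptions.

```python
import numpy as np

def make_windows(series, T=24, horizon=1):
    """Slide a length-T window over an (N, F) series; the target is the
    power output (column 0) `horizon` steps after each window."""
    X, y = [], []
    for i in range(len(series) - T - horizon + 1):
        X.append(series[i:i + T])
        y.append(series[i + T + horizon - 1, 0])
    return np.stack(X), np.array(y)

# stand-in data: 1000 steps of 4 features (power + 3 weather variables)
data = np.random.default_rng(0).random((1000, 4))
n = len(data)
train = data[:int(0.8 * n)]
val = data[int(0.8 * n):int(0.9 * n)]
test = data[int(0.9 * n):]

# fit the min-max scaler on the training split only (no leakage)
lo, hi = train.min(axis=0), train.max(axis=0)
scale = lambda d: (d - lo) / (hi - lo + 1e-8)

X_tr, y_tr = make_windows(scale(train))
```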
Communication Scenario Configuration  
To evaluate robustness under realistic network conditions, multiple communication scenarios are simulated.  
These scenarios model the effects of latency, packet loss, and asynchronous data arrival commonly observed in  
intelligent power networks.  
Latency is simulated as a random variable added to each data sample, representing end-to-end communication  
delay. Three latency regimes are considered: low-latency, moderate-latency, and high-latency conditions. Packet  
loss is modeled as a Bernoulli process, where each transmitted data sample has a fixed probability of being  
dropped. Packet loss rates ranging from 0% to 20% are evaluated to reflect varying network reliability.  
Bandwidth constraints are indirectly modeled by limiting the number of data samples that can be delivered within  
a given time window, resulting in delayed or missing observations. These impairments collectively generate  
communication-impaired input sequences that deviate from idealized, fully observed data streams.  
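These scenario parameters can be collected in a small configuration structure, as sketched below. The three named regimes and the 0-20% loss range follow the text; the specific mean-delay values assigned to each regime are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CommScenario:
    """One simulated network condition for evaluation."""
    name: str
    mean_delay_s: float   # mean end-to-end latency in seconds
    loss_prob: float      # Bernoulli packet-loss probability

# illustrative delay values per regime; loss range follows the setup
SCENARIOS = [
    CommScenario("low-latency",      0.1, 0.00),
    CommScenario("moderate-latency", 0.3, 0.10),
    CommScenario("high-latency",     0.6, 0.20),
]
```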
Model Configuration and Training Parameters  
Two forecasting models are evaluated: a communication-agnostic baseline LSTM and the proposed  
communication-aware LSTM. Both models share the same core architecture to ensure fair comparison. The  
LSTM networks consist of one or more recurrent layers followed by a fully connected output layer that produces  
the final solar power prediction.  
Models are trained using the Adam optimizer with a fixed learning rate. The mean squared error loss is used for  
the baseline model, while the communication-aware model employs a latency-weighted loss function to  
emphasize robustness under adverse network conditions. Training is performed for a fixed number of epochs  
with early stopping based on validation loss to prevent overfitting.  
Evaluation Metrics  
Model performance is evaluated using standard regression metrics commonly adopted in solar energy forecasting  
studies. These include Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute  
Percentage Error (MAPE), and the coefficient of determination (R²). These metrics provide complementary  
perspectives on forecasting accuracy and error distribution.  
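The four metrics can be computed directly from predictions and targets; this helper is a straightforward sketch of the standard definitions (the small epsilon guards against division by zero and is our addition).

```python
import numpy as np

def regression_metrics(y_true, y_pred, eps=1e-8):
    """Return MAE, RMSE, MAPE (%), and R^2 for a forecast."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err) / (np.abs(y_true) + eps))
    ss_res = np.sum(err ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / (ss_tot + eps)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}
```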
In addition to forecasting accuracy, system-level performance is assessed by analyzing the relationship between  
prediction accuracy and communication latency. This accuracy-latency trade-off provides insights into the
practical deployment of deep learning forecasting models in intelligent power networks.  
Implementation Environment  
All simulations and model training are implemented in Python using PyTorch. Experiments are conducted on a  
standard computing platform equipped with a modern CPU and optional GPU acceleration. The modular code  
structure allows easy modification of network parameters, model configurations, and dataset characteristics,  
enabling extensive experimentation and reproducibility.  
This experimental setup establishes a rigorous foundation for the results and performance evaluation presented  
in the next section.  
Results and Performance Evaluation  
This section presents the simulation-based results obtained from the proposed communication-aware deep  
learning framework. The performance of the communication-aware LSTM is compared against a  
communication-agnostic baseline under both ideal and impaired communication conditions. All results are  
generated using the PyTorch-based simulation framework described in Section 5.
Forecasting Accuracy under Ideal Communication Conditions  
We first evaluate the forecasting performance under ideal communication conditions, where no latency or packet  
loss is present. This scenario represents the conventional assumption adopted in most existing solar forecasting  
studies and serves as a baseline reference.  
Under ideal conditions, both the baseline LSTM and the communication-aware LSTM achieve high prediction  
accuracy, indicating that the introduction of communication-awareness does not degrade performance when data  
transmission is reliable. The results confirm that the proposed model preserves the temporal learning capability  
of standard deep learning architectures.  
Table 1. Forecasting performance under ideal communication conditions

Model                       MAE     RMSE    MAPE (%)   R²
Baseline LSTM               0.082   0.104   6.8        0.94
Communication-Aware LSTM    0.079   0.101   6.5        0.95
Table 2. RMSE under varying packet loss rates

Packet Loss (%)   Baseline LSTM RMSE   Comm-Aware LSTM RMSE
0                 0.104                0.101
5                 0.118                0.109
10                0.132                0.118
20                0.158                0.134
These results indicate comparable performance between the two models under ideal conditions, establishing a  
fair baseline for subsequent evaluations.  
Impact of Communication Latency  
Next, the impact of communication latency on forecasting accuracy is examined. Three latency regimes are  
considered: low, moderate, and high latency. As latency increases, the forecasting model receives increasingly  
stale input data, which negatively affects temporal consistency.  
The baseline LSTM exhibits a noticeable degradation in performance as latency increases, reflecting its  
sensitivity to delayed inputs. In contrast, the communication-aware LSTM demonstrates improved robustness,  
maintaining lower error rates across all latency regimes.  
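The effect of stale inputs can be reproduced with a simple zero-order-hold delay: the receiver keeps seeing the last value that arrived. The sketch below is a hypothetical illustration; the sampling period and delay model are assumptions, not the paper's exact impairment model:

```python
import numpy as np

def apply_latency(signal, latency_s, sample_period_s=0.1):
    """Return the 'stale' signal seen at the control center: each
    observation arrives latency_s late, so the receiver sees values
    from k samples in the past (zero-order hold at the start)."""
    k = int(round(latency_s / sample_period_s))
    if k == 0:
        return signal.copy()
    stale = np.empty_like(signal)
    stale[:k] = signal[0]      # pad with the first observation
    stale[k:] = signal[:-k]    # everything else is shifted back
    return stale

x = np.arange(10, dtype=float)
x_stale = apply_latency(x, latency_s=0.3)  # 3-sample delay
```

Feeding `x_stale` instead of `x` to a forecaster reproduces the temporal inconsistency that degrades the baseline model as latency grows.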
Figure 3 illustrates the relationship between latency and RMSE for both models.  
Figure 3. RMSE versus communication latency  
This plot illustrates how forecasting error (RMSE) increases with communication latency. The Communication-Aware LSTM demonstrates significantly better robustness than the Baseline LSTM as latency increases from 0.1 s to 0.6 s.  
Figure 4. RMSE Heatmap under Communication Impairments  
This heatmap illustrates the combined effect of packet loss and latency on forecasting accuracy. Latency ranges  
from 0.1 to 0.6 seconds on the vertical axis, while packet loss spans from 0% to 20% on the horizontal axis.  
Each cell indicates the Root Mean Square Error (RMSE) for specific conditions. Despite both factors affecting  
performance, the model demonstrates resilience, with RMSE remaining stable at 0.148 even under the most  
challenging scenario of 0.6 seconds delay and 20% packet loss.  
Impact of Packet Loss  
We further evaluate model performance under varying packet loss rates ranging from 0% to 20%. Packet loss  
introduces missing observations, which disrupt the temporal structure of the input sequences.  
The communication-agnostic baseline suffers significant performance degradation as packet loss increases. In  
contrast, the communication-aware LSTM maintains relatively stable accuracy due to its explicit handling of  
missing data through masking mechanisms.  
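The masking mechanism can be illustrated as follows: lost samples are zero-filled, and a binary mask channel tells the model which observations actually arrived. This is a minimal sketch under assumed conventions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def drop_packets(seq, loss_rate):
    """Simulate packet loss: dropped samples are zero-filled and a
    binary mask channel marks which observations arrived."""
    mask = (rng.random(len(seq)) >= loss_rate).astype(float)
    observed = seq * mask                       # lost samples become 0
    return np.stack([observed, mask], axis=-1)  # shape (T, 2): value + mask

seq = np.ones(1000)
out = drop_packets(seq, loss_rate=0.10)
arrival_rate = out[:, 1].mean()  # close to 0.90 for a 10% loss rate
```

Because the mask is an explicit input channel, the network can learn to discount the zero-filled positions instead of misreading them as genuine zero-power observations.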
Table 3. RMSE under varying packet loss rates  
Packet Loss (%)   Baseline LSTM RMSE   Comm-Aware LSTM RMSE  
0                 0.104                0.101  
5                 0.118                0.109  
10                0.132                0.118  
20                0.158                0.134  
These findings confirm that communication-aware learning substantially improves robustness to data loss in  
networked forecasting environments.  
Comparison with Communication-Agnostic Models  
To quantify overall robustness, the performance gap between the two models is analyzed across all simulated  
communication scenarios. The communication-aware LSTM consistently outperforms the baseline model,  
particularly under adverse network conditions.  
Figure 5. Average RMSE comparison across communication scenarios  
As shown in the visualization, the Communication-Aware LSTM achieves a lower average RMSE of 0.118  
compared to the 0.130 recorded by the Baseline LSTM, representing an overall improvement in forecasting  
robustness across the simulated network conditions.  
Figure 6. Error Distribution Comparison between Models  
Accuracy–Latency Trade-Off Analysis  
Finally, the trade-off between forecasting accuracy and communication latency is analyzed from a system-level  
perspective. While increased latency generally degrades prediction accuracy, the proposed communication-  
aware model exhibits a slower performance decay, making it more suitable for real-time deployment in  
intelligent power networks.  
Figure 7. Accuracy–latency trade-off analysis  
This analysis highlights the practical advantages of communication-aware deep learning for real-time solar  
energy forecasting in intelligent power networks.  
Figure 8. Training Convergence Comparison of Forecasting Models  
The figure below shows the training convergence behavior of the baseline LSTM and the proposed  
communication-aware LSTM models. Both models exhibit stable learning dynamics, with training loss  
decreasing monotonically as the number of epochs increases. However, the communication-aware LSTM  
converges faster and reaches a lower steady-state loss compared to the communication-agnostic baseline. This  
behavior indicates that incorporating communication-related information does not hinder the optimization  
process and, instead, facilitates more efficient learning by enabling the model to better handle temporally  
impaired input data. The smoother convergence profile of the proposed model further suggests improved training  
stability, which is particularly important for real-time forecasting applications deployed in dynamic and  
communication-constrained power network environments.  
Figure 9. Training Convergence Comparison of Forecasting Models  
A. Explicit Research Contributions  
This paper makes the following key contributions:  
Communication-Aware Solar Forecasting Framework  
This work introduces a novel communication-aware deep learning framework for real-time solar energy  
forecasting that jointly considers power system dynamics and communication network impairments. Unlike  
conventional forecasting approaches that assume ideal data availability, the proposed framework explicitly  
models latency, packet loss, and asynchronous data arrival.  
Latency-Weighted Learning Mechanism  
A latency-weighted loss formulation is proposed to guide model training under non-ideal communication  
conditions. This mechanism enables the forecasting model to prioritize temporally reliable information, thereby  
improving robustness in delay-sensitive environments.  
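One plausible form of such a latency-weighted loss is an exponentially down-weighted MSE; the kernel and the decay constant `tau` below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def latency_weighted_mse(y_true, y_pred, latencies, tau=0.5):
    """Latency-weighted MSE: samples that arrived with higher latency
    contribute less to the loss, so training emphasizes temporally
    reliable information. tau (seconds) sets how fast the weight
    decays with delay -- a hypothetical kernel choice."""
    w = np.exp(-np.asarray(latencies, float) / tau)  # weight in (0, 1]
    w = w / w.sum()                                  # normalize weights
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sum(w * err ** 2))

# An identical error on a delayed sample (0.6 s) is penalized less
# than the same error on a fresh sample (0.0 s):
loss_delayed_err = latency_weighted_mse([1.0, 1.0], [1.0, 0.0], latencies=[0.0, 0.6])
loss_fresh_err   = latency_weighted_mse([1.0, 1.0], [0.0, 1.0], latencies=[0.0, 0.6])
```

In a PyTorch training loop, the same weighting would simply multiply the per-sample squared errors before reduction.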
End-to-End Simulation-Based Evaluation  
A fully reproducible PyTorch-based simulation framework is developed to evaluate forecasting performance  
under varying communication scenarios. The framework integrates data generation, network impairment  
modeling, deep learning training, and performance evaluation in a unified pipeline.  
System-Level Accuracy–Latency Trade-Off Analysis  
The study provides a detailed system-level analysis of the trade-off between forecasting accuracy and  
communication latency, offering practical insights for deploying learning-based forecasting systems in  
intelligent power networks.  
B. Ablation Study: Impact of Communication Awareness  
To further validate the effectiveness of communication-aware learning, an ablation study is conducted by  
selectively disabling communication-related components of the proposed model. Three model variants are  
evaluated:  
Baseline LSTM: No communication information  
LSTM + Latency Input: Latency-aware input without loss reweighting  
Full Communication-Aware LSTM: Latency-aware input with latency-weighted loss  
Table 4. Ablation study results under moderate latency conditions.  
Model Variant            MAE     RMSE    R2  
Baseline LSTM            0.115   0.137   0.89  
LSTM + Latency Input     0.103   0.121   0.92  
Full Comm-Aware LSTM     0.094   0.110   0.94  
The results indicate that incorporating communication awareness incrementally improves forecasting accuracy.  
The full communication-aware model achieves the best performance, confirming the necessity of jointly  
modeling latency at both the input and loss levels.  
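The three variants differ only in how their inputs are assembled. A minimal sketch of that difference, with a hypothetical feature layout, is:

```python
import numpy as np

def build_inputs(power, latency, variant):
    """Assemble per-timestep model inputs for the ablation variants.
    Feature layout is hypothetical: column 0 is power, column 1 (if
    present) is the measured communication latency."""
    power = np.asarray(power, float)[:, None]      # shape (T, 1)
    if variant == "baseline":
        return power                               # power only
    lat = np.asarray(latency, float)[:, None]      # shape (T, 1)
    return np.concatenate([power, lat], axis=-1)   # power + latency channel

x_base  = build_inputs([0.5, 0.6], [0.1, 0.2], "baseline")
x_aware = build_inputs([0.5, 0.6], [0.1, 0.2], "latency_input")
```

The full communication-aware variant additionally swaps the plain MSE objective for the latency-weighted loss during training; the architecture itself is unchanged.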
C. Model Complexity and Computational Overhead Analysis  
To address concerns regarding scalability and real-time deployment, the computational complexity of the  
proposed model is analyzed. The communication-aware LSTM shares the same core architecture as the baseline  
model, resulting in minimal additional parameter overhead.  
The primary computational cost arises from the recurrent operations of the LSTM layers, with time complexity  
proportional to O(T·H²), where T is the sequence length and H is the hidden state dimension. The inclusion of  
communication features introduces negligible additional cost, as latency and packet loss indicators are low-dimensional inputs.  
Table 5. Model complexity comparison.  
Model             Trainable Parameters (Millions)   Training Time per Epoch (s)   Inference Latency (ms)  
Baseline LSTM     1.24                              18.3                          4.7  
Comm-Aware LSTM   1.26                              18.9                          5.0  
The results demonstrate that the proposed communication-aware model incurs only marginal computational  
overhead while delivering substantial robustness gains. This makes it suitable for deployment in latency-  
sensitive smart grid environments.  
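The small parameter gap in Table 5 is consistent with the standard LSTM parameter count, since adding low-dimensional communication features only widens the first layer's input. The layer sizes below are assumptions chosen for illustration, not the paper's reported configuration:

```python
def lstm_param_count(input_size, hidden_size, num_layers):
    """Standard LSTM parameter count: 4 gates per layer, each with an
    input matrix, a recurrent matrix, and (as in PyTorch) two bias
    vectors."""
    total = 0
    for layer in range(num_layers):
        in_sz = input_size if layer == 0 else hidden_size
        total += 4 * (hidden_size * in_sz
                      + hidden_size * hidden_size
                      + 2 * hidden_size)
    return total

base  = lstm_param_count(input_size=4, hidden_size=256, num_layers=2)
aware = lstm_param_count(input_size=6, hidden_size=256, num_layers=2)  # +2 comm features
overhead = (aware - base) / base  # well under 1%
```

Only the first layer's input-to-hidden matrix grows, so the relative overhead shrinks further as hidden size or depth increases.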
D. Threats to Validity and Robustness Discussion  
Several potential threats to validity are acknowledged. First, the communication impairments are simulated using  
statistical models, which may not capture all real-world network dynamics. Second, the experiments focus on  
short-term forecasting horizons, and performance may vary for longer prediction windows. Despite these  
limitations, the consistency of results across multiple communication scenarios and evaluation metrics supports  
the robustness of the proposed approach.  
Limitations and Practical Challenges  
Interoperability with Legacy Power Systems  
Despite the effectiveness of the proposed communication-aware deep learning framework, interoperability with  
existing power system infrastructure remains a challenge. Many operational solar plants rely on legacy SCADA  
systems and heterogeneous manufacturer protocols, such as IEC 61850, which were not originally designed to  
support AI-driven decision-making pipelines. Integrating advanced learning models into these environments  
may require additional middleware, protocol translation layers, or system reconfiguration, potentially increasing  
deployment complexity and operational costs.  
Cybersecurity and Communication Vulnerabilities  
While incorporating communication state information enhances forecasting robustness, it also introduces new  
attack surfaces. Communication-aware models may be more susceptible to adversarial threats such as data  
poisoning, spoofed network metrics, or signal jamming attacks that distort input communication features.  
Without appropriate safeguards, these attacks could degrade model performance or induce unsafe operational  
decisions. Future deployments must therefore integrate secure communication channels, anomaly detection  
mechanisms, and robust learning techniques to mitigate such risks.  
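As a concrete example of such a safeguard, spoofed or implausible latency reports can be screened with a rolling robust z-score before they reach the model; the window size and threshold below are hypothetical choices:

```python
import numpy as np

def latency_anomaly_flags(latencies, window=50, z_thresh=4.0):
    """Flag reported latency values that deviate strongly from recent
    history, as a cheap guard against spoofed network metrics.
    Uses a median/MAD-based robust z-score over a sliding window."""
    lat = np.asarray(latencies, float)
    flags = np.zeros(len(lat), dtype=bool)
    for i in range(window, len(lat)):
        hist = lat[i - window:i]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) + 1e-9  # robust spread
        flags[i] = abs(lat[i] - med) / (1.4826 * mad) > z_thresh
    return flags

lat = [0.1] * 100 + [5.0]          # a sudden, implausible latency spike
flags = latency_anomaly_flags(lat)
```

Flagged readings could be clamped, imputed, or excluded from the communication-feature channel rather than trusted verbatim.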
Computational Overhead and Edge Deployment Constraints  
The deployment of deep learning models at the network edge, close to solar generation units, raises concerns  
regarding computational efficiency and energy consumption. Although the proposed model introduces only  
marginal overhead in simulation, real-world edge devices often operate under strict power and hardware  
constraints. Running complex deep learning models may necessitate specialized low-power AI accelerators or  
model compression techniques, such as pruning or quantization, to ensure feasibility in large-scale, distributed  
solar installations.  
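As an illustration of the memory side of such compression, symmetric per-tensor int8 quantization stores each weight tensor as int8 values plus one float scale, roughly a 4x reduction over float32. This is a generic sketch, not tied to any particular deployment toolchain:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: weights become int8
    plus a single float scale factor."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()  # rounding error bounded by scale/2
```

Whether int8 precision preserves forecasting accuracy would need to be validated empirically on the target model and hardware.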
Practical Implications and Research Outlook  
Communication-aware deep learning is rapidly transitioning from a primarily theoretical research concept to a  
practical requirement for Smart Grid 2.0 architectures. By explicitly modeling the communication link as a  
dynamic and uncertainty-aware variable rather than a static assumption, such approaches enable forecasting  
systems to remain reliable under real-world network impairments. This paradigm shift is particularly critical for  
solar energy integration, where inherent intermittency is compounded by communication delays and packet loss  
in distributed power networks. Treating communication conditions as first-class model inputs allows intelligent  
forecasting systems to improve robustness, operational resilience, and grid stability, positioning solar energy as  
a dependable backbone for next-generation intelligent power networks.  
Conclusion and Future Work  
This paper investigated the problem of real-time solar energy forecasting in intelligent power networks under  
realistic communication constraints. Unlike conventional deep learning-based forecasting approaches that  
assume ideal data transmission, this study explicitly considered the impact of communication latency, packet  
loss, and asynchronous data arrival on forecasting performance. By jointly modeling communication  
impairments and learning processes, a communication-aware deep learning framework was proposed to improve  
robustness and reliability in networked forecasting environments.  
A communication-aware LSTM-based forecasting model was developed and evaluated using a fully simulation-  
driven experimental framework implemented in PyTorch. The proposed approach integrates communication-  
related information into the input representation and learning objective, enabling the model to adapt to non-ideal  
network conditions. Extensive simulation results demonstrated that the communication-aware model achieves  
forecasting accuracy comparable to conventional models under ideal communication conditions, while  
significantly outperforming communication-agnostic baselines when latency and packet loss are present.  
The experimental analysis further revealed that communication impairments can substantially degrade the  
performance of traditional forecasting models, particularly in high-latency and high-packet-loss scenarios. In  
contrast, the proposed framework exhibited improved resilience and a more favorable accuracy–latency trade-off, highlighting its suitability for real-time deployment in intelligent power networks. These findings emphasize  
the importance of jointly considering communication and learning aspects when designing data-driven  
forecasting systems for smart grids.  
Despite its effectiveness, this study has several limitations that suggest directions for future research. First, the  
communication model employed in this work is based on statistical simulation and does not capture all  
characteristics of real-world communication infrastructures. Future studies may integrate detailed network  
simulators or real communication traces to further enhance realism. Second, the forecasting model focuses on  
short-term prediction using a single deep learning architecture. Exploring alternative architectures, such as  
attention-based Transformers or graph neural networks, may yield additional performance improvements.  
Future work may also investigate adaptive and learning-based communication strategies, where network  
resources are dynamically allocated based on forecasting requirements. Reinforcement learning techniques could  
be employed to jointly optimize communication and forecasting policies in real time. In addition, extending the  
framework to multi-source and geographically distributed power systems, as well as validating the proposed  
approach using real-world smart grid deployments, represents an important step toward practical adoption.  
In summary, this paper demonstrates that communication-aware deep learning provides a promising and  
practical solution for real-time solar energy forecasting in intelligent power networks. By bridging the gap  
between deep learning and communication modeling, the proposed framework contributes toward more reliable,  
resilient, and intelligent renewable energy systems.  