ISAS_2024_SolarForecast_BiLSTM_Attn - submitted

This study investigates the use of a Bi-directional Long Short-Term Memory (Bi-LSTM) model enhanced with an attention mechanism for solar power forecasting in Vietnam. The proposed model is compared to traditional forecasting methods, demonstrating improved accuracy by effectively capturing complex temporal dependencies in solar power data. The research utilizes real-world datasets and emphasizes the importance of accurate solar energy forecasting for grid stability and energy market decisions.


Exploring the Efficiencies of Bi-LSTM Models in Solar Power Forecasting in Vietnam

Le Thi Tinh Minh¹, Ngo Tri Si², and Nguyen Thi Kim Truc³*

¹ Faculty of Electrical and Electronics Engineering, Ho Chi Minh University of Technology (HCMUT), VNU-HCM, Ho Chi Minh City, Vietnam
² Faculty of Information Technology, Ho Chi Minh University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
³ Faculty of Electrical Engineering, University of Science and Technology, UD, Da Nang City, Vietnam

E-mail: le.thitinhminh@hcmut.edu.vn; ntsi22@apcs.fitus.edu.vn; ntktruc@dut.udn.vn



Corresponding author: Nguyen Thi Kim Truc, ntktruc@dut.udn.vn

Abstract. Solar energy, as a pivotal renewable resource, holds the potential to address global energy demands while mitigating the environmental impact of traditional energy sources. Accurate forecasting of solar energy output is essential for efficient power system management, ensuring grid stability, and facilitating informed decision-making in energy markets. Recent research has highlighted the integration of attention mechanisms into Long Short-Term Memory (LSTM) models, demonstrating their superior predictive capabilities in this domain. This study contributes to the field by exploring a Bi-directional LSTM (Bi-LSTM) model augmented with an attention mechanism, incorporating three time-series features for enhanced solar energy forecasting. The proposed predictive model is empirically compared to multi-variable time series models, revealing the efficiencies of the Bi-LSTM. The results are extensively evaluated on real solar power datasets in Vietnam and overseas.

Keywords— Renewable energy, photovoltaic power forecasting, solar power forecasting, long short-term
memory, Bi-LSTM model

1 Introduction
In recent years, the global energy industry has increasingly focused on reducing carbon emissions by transitioning to renewable energy sources, driven by the need to mitigate the adverse effects of excessive emissions on
climate change and global warming. The surging demand for energy, coupled with diminishing non-renewable
sources like coal, natural gas, and petroleum, has prompted countries to adopt forward-looking energy policies.
For instance, in 2015, the USA and China committed to a full transition to renewable energy, while the European
Union has set ambitious targets for 30% renewable electricity by 2030 and 100% by 2050 [1].
Vietnam has also strategically shifted towards renewable energy development, with significant growth in wind
and solar sectors. The nation’s installed solar capacity has reached approximately 19,400 MWp, with rooftop
solar contributing nearly 9,300 MWp, constituting around 25% of the total capacity in the national grid. The
Draft Power Plan 8 projects a further increase, with an estimated 22,040 MW of solar capacity by 2030 and 63,640
MW by 2045 [2].
While solar energy is one of the most widely utilized renewable resources, its intermittent nature poses
challenges for grid integration. Accurate forecasting of solar irradiance and photovoltaic (PV) power output
is critical for improving power system management and ensuring grid stability. Traditional statistical models,
such as ARIMA and SARIMA, have been widely used for short-term power forecasting. However, these models
often struggle to capture the complex, nonlinear relationships between factors like weather patterns and power
generation, leading to reduced accuracy in forecasting renewable energy sources like solar and wind power [3, 4].
The advancement of machine learning has provided more effective tools for solar power forecasting. Techniques
such as artificial neural networks (ANNs), support vector machines (SVMs), and decision trees have demonstrated
improved accuracy by capturing the complex, nonlinear dynamics of power data [5]. However, the emergence of
deep learning models, particularly Long Short-Term Memory (LSTM) networks, has brought about significant
improvements in forecasting accuracy. LSTM models are well-suited for time-series data due to their ability to
capture long-term dependencies, making them a popular choice for renewable energy forecasting [6].
Despite their success, LSTM models still face limitations in capturing bidirectional dependencies in sequential
data. This has led to the development of Bi-directional LSTM (Bi-LSTM) models, which process information in
both forward and backward directions, improving their ability to understand complex temporal relationships. Furthermore, the integration of attention mechanisms into LSTM models has been shown to enhance their predictive
performance by selectively focusing on the most relevant parts of the input sequence. The attention mechanism
enables the model to weigh the importance of different time steps, resulting in improved accuracy for solar power
forecasting tasks [7].
In this work, we introduce a Bi-LSTM model augmented with an attention mechanism (BiLSTM-Attn) to
address the limitations of traditional LSTM models. By leveraging the attention mechanism, our model dynamically learns the significance of different time steps, enabling more accurate predictions of solar power output. We
compare the performance of the BiLSTM-Attn model against other recurrent models, including vanilla LSTM,
GRU, and standard Bi-LSTM models, using real-world solar power datasets from Vietnam and abroad. Our
results demonstrate that the BiLSTM-Attn model outperforms the other models in terms of predictive accuracy,
making it a valuable tool for solar energy forecasting.
The remainder of this paper is organized as follows: Section 2 describes the datasets used and outlines the
proposed BiLSTM-Attn model, including data preprocessing steps. Section 3 details the experimental setup,
model training, and provides a thorough analysis of the results. Finally, Section 4 presents the conclusion and
discusses future research directions.

2 Database
This paper utilizes two datasets:

• The first dataset (AT dataset) encompasses solar power generation data in Austria, collected at one-hour
intervals from January 1, 2015, to September 30, 2020, at various solar power plants across the country.
This data has been preprocessed.
• The second dataset (TNSPP dataset) pertains to solar power generation at the Trung Nam solar power
plant, located in Loi Hai and Bac Phong, Thuan Bac, Ninh Thuan, Vietnam. This dataset is collected at
1-minute intervals from January 1, 2020, to May 31, 2021. The data includes the total AC output power
of 45 inverters, each with a rated power of 4.45 MW, at the Trung Nam solar power plant. Additionally,
the dataset includes radiation measurements from 6 selected inverters, providing an average radiation value
each day. It is essential to note that, given the nature of solar power generation forecasting, only data from
5:30 am to 6:00 pm daily has been utilized in this paper.
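The daylight restriction described for the TNSPP dataset can be sketched with pandas; the synthetic dates and the power_mw column name below are illustrative, not from the actual dataset, and the window is taken as 05:30 to 18:00 inclusive:

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute power series covering two full days (values are dummies).
idx = pd.date_range("2020-01-01", periods=2 * 24 * 60, freq="min")
df = pd.DataFrame({"power_mw": np.zeros(len(idx))}, index=idx)

# Keep only the daylight window used in the paper: 05:30 to 18:00 daily.
daytime = df.between_time("05:30", "18:00")

# 12.5 hours per day at 1-minute resolution, endpoints inclusive: 751 rows/day.
print(len(daytime))  # 1502
```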

3 Proposed Solar Power Forecasting Framework


The study presents a short-term solar power forecasting task aimed at predicting solar power output for the
subsequent time step. For the AT dataset, this involves forecasting the next hour, while for the TNSPP dataset,
it forecasts the next minute. The task utilizes two primary input variables: historical solar power data and solar
radiation data, with a total of 360 historical samples. The goal is to accurately predict solar power output for the
test set.
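The 360-sample history can be arranged into model-ready tensors with a simple sliding window. The sketch below uses random stand-in data and assumes the power target is the first feature column (an illustrative choice, not stated by the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
series = rng.random((500, 3))   # toy multivariate series: (time steps, 3 features)
seq_len = 360                   # 360 historical samples per input window

# Each window of 360 steps predicts the next step's power (feature 0 here).
X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
y = series[seq_len:, 0]

print(X.shape, y.shape)  # (140, 360, 3) (140,)
```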
The flowchart in Figure 1 illustrates the solar power forecasting process in detail. It begins with the collection of data from solar power and radiation sources as described in Section 2. Data preprocessing is then performed to clean the dataset, removing anomalies and handling missing values.

Figure 1: The Proposed Solar Power Forecasting Flowchart
Next, the model building phase involves selecting an appropriate forecasting model architecture. This model
is trained using historical data to understand patterns between input variables and the target output. The final
step involves evaluating the model’s performance in predicting solar power for the next time step, which helps
refine and improve forecasting accuracy.

3.1 Data Pre-processing


Before utilizing the collected data, it is essential to perform thorough checks and pre-processing to ensure its quality and suitability for training a machine learning algorithm. This involves tasks such as data cleaning, removal of irrelevant entries, and transformation to a consistent format.
Handling Missing Values: To handle the missing data, it’s important to first understand the potential causes.
In PV solar generator plants, missing data can be caused by several factors, such as communication failures
between inverters and data loggers, power outages, equipment malfunctions, scheduled maintenance, or sensor
issues. Extreme weather conditions or environmental damage to cables and devices may also result in data gaps.
Additionally, software bugs, misconfigurations, or system updates may lead to periods of missing data.
To address these missing values, various approaches are available, such as deletion [8], mean/median substitution [9], Last Value Carried Forward (LVCF) [10], and interpolation methods such as Dynamic Time Warping-Based Imputation (DTWBI) or Extended Dynamic Time Warping-Based Imputation (eDTWBI) [11], which are advanced techniques for handling missing data in time-series datasets. According to [11], interpolation yields superior forecasting results for time series that exhibit both trending and seasonal characteristics. Therefore, the primary method for handling missing values in this study is the DTWBI interpolation method.
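A full DTWBI implementation is beyond the scope of a short sketch, since it searches for the most similar sub-sequence via Dynamic Time Warping before copying it into the gap. As a minimal illustration of gap filling on a toy series, the sketch below applies the plain linear-interpolation baseline that DTWBI is compared against in [11] (the values are invented for illustration):

```python
import numpy as np
import pandas as pd

# Toy series with a two-step gap. The paper's DTWBI method fills gaps by
# locating the most similar sub-sequence via Dynamic Time Warping; here we
# show only the simpler linear interpolation baseline it is compared against.
s = pd.Series([10.0, 12.0, np.nan, np.nan, 18.0, 20.0])
filled = s.interpolate(method="linear")

print(filled.tolist())  # [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
```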
Removing Outliers: In this paper, a proximity-based method, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [12], is employed to detect outliers. This choice is motivated by its low error rates and relative simplicity. After outliers are detected, they are replaced with new values obtained from the interpolation method. Figure 2 shows the outliers in the TNSPP dataset and the results after data preprocessing.
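The core idea behind DBSCAN's outlier flagging can be sketched in plain NumPy: a point with too few neighbours inside a radius eps is treated as noise. The sketch below is a deliberate simplification (full DBSCAN would additionally keep border points that lie within eps of a core point), and the eps/min_samples values and sample data are illustrative, not taken from the paper:

```python
import numpy as np

def density_outliers(values, eps, min_samples):
    """Flag non-core points in the DBSCAN sense: fewer than `min_samples`
    neighbours (itself included) within radius `eps`. Simplified: full
    DBSCAN would also keep border points within eps of a core point."""
    v = np.asarray(values, dtype=float)
    dist = np.abs(v[:, None] - v[None, :])        # pairwise 1-D distances
    neighbour_counts = (dist <= eps).sum(axis=1)
    return neighbour_counts < min_samples

power = np.array([5.0, 5.2, 5.1, 4.9, 5.0, 12.0])  # 12.0 is a spurious spike
mask = density_outliers(power, eps=0.5, min_samples=3)
print(mask)  # [False False False False False  True]
```

Flagged values would then be replaced via the interpolation step described above.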
Data Transformation: Data transformation is an extremely crucial step in machine learning in general and
artificial intelligence in particular. This is because the input variables of an artificial neural network model often
require small values, which can either be within the range of 0 to 1 (Data Normalization) or normalized with a
mean value of 0 and a standard deviation of 1 (Data Standardization). In this paper, the data is transformed
using the data normalization method through the following formula:

    y = (x − x_min) / (x_max − x_min)    (1)

where x, y, x_min, and x_max represent an instance value, its min-max normalized value, the dataset minimum value, and the dataset maximum value, respectively.
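Equation (1) translates directly into a small helper; the sample values below are made up for illustration:

```python
import numpy as np

def minmax_normalize(x):
    """Min-max normalization per Eq. (1): y = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

y = minmax_normalize([2.0, 4.0, 6.0, 10.0])
print(y.tolist())  # [0.0, 0.25, 0.5, 1.0]
```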
Figure 2: Outliers in the TNSPP (Trung Nam solar power plant) dataset: (a) before removing outliers; (b) after removing outliers.

3.2 Proposed Model Architecture


This paper investigates the application of a Bidirectional Long Short-Term Memory (BiLSTM) network augmented with an attention mechanism for solar power forecasting. The primary objective is to evaluate the model's
effectiveness in predicting solar power output from time-series data, utilizing critical components such as BiLSTM,
attention mechanisms, and a carefully designed model architecture.
The BiLSTM network represents a significant advancement in sequence processing by analyzing input data
in both forward and backward directions. This bidirectional approach enables the model to capture temporal
dependencies from both past and future contexts, which is essential for enhancing forecasting accuracy. In the
context of solar power forecasting, where the interplay between historical and future data points is crucial, the
BiLSTM network demonstrates clear advantages.
Integrating an attention mechanism into the BiLSTM framework provides several benefits. First, it enhances
the model’s focus by allowing it to dynamically concentrate on the most relevant parts of the input sequence,
which is particularly useful for managing long-term dependencies where certain time steps are more influential
than others. Second, the attention mechanism improves the model’s interpretability by indicating which portions
of the input data are most critical for making predictions. This transparency aids in understanding the model’s
decision-making process and in identifying potential issues. Finally, the attention mechanism effectively handles
long sequences by alleviating the common problem of information dilution, ensuring that important information
is preserved and utilized efficiently.
The model architecture is specifically designed to optimize the effectiveness of these components. It is built
for a multivariate forecasting problem, utilizing three input sequences: time (hour-wise), solar radiation, and
generator power. The input data is organized into a shape of (number of samples, number of historical data,
number of features) i.e., (512, 360, 3) and fed into a BiLSTM layer with 128 neurons, which processes the input
sequence bidirectionally to capture temporal dependencies effectively. The output from the BiLSTM is then
directed to an attention layer that computes attention scores and weights to emphasize relevant time steps. This
weighted output is subsequently passed to a fully connected layer with 128 neurons, culminating in a final output
layer containing a single neuron that predicts the solar power output for the next time step. This architecture
capitalizes on the strengths of both LSTM networks and attention mechanisms to enhance prediction accuracy
by considering contexts from both past and future time steps while highlighting important features in the input
data.
The attention layer in this model is crafted to enhance the focus on significant features within the hidden
states during sequence-based learning. It receives as input the hidden state, which combines the outputs from
the forward and backward LSTMs (i.e., hidden size ×2). Initially, a linear transformation is applied to compute
attention scores, followed by a non-linear activation function (tanh) to ensure smoother gradients. These scores
are normalized using a softmax function to generate attention weights that reflect the importance of each hidden
state in the final representation. The context vector, representing the weighted sum of the hidden states, is
computed through element-wise multiplication of the attention weights and the hidden states. This mechanism enables the model to selectively emphasize relevant information, thereby enhancing its interpretability and overall performance in sequence prediction tasks.

Figure 3: Proposed model architecture using BiLSTM and attention layer, along with the unrolled BiLSTM-attention mechanism.
Figure 3 illustrates our proposed architecture, which seamlessly integrates the strengths of BiLSTM networks
and attention mechanisms, alongside a depiction of the mechanism of a single BiLSTM neuron combined with
the attention mechanism.
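The score-tanh-softmax-context pipeline of the attention layer described above can be sketched numerically. This is a NumPy walk-through with randomly initialized weights, not the trained model; the dimensions (360 time steps, 2 × 128 hidden units) follow the paper, while the initialization is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H2 = 360, 2 * 128      # time steps; BiLSTM hidden size 128 per direction

hidden = rng.normal(size=(T, H2))   # stand-in for the BiLSTM output sequence
w = rng.normal(size=(H2, 1))        # attention projection (illustrative init)
b = np.zeros(1)

scores = np.tanh(hidden @ w + b).squeeze(-1)       # linear transform + tanh, (T,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                           # softmax over time steps, (T,)
context = (weights[:, None] * hidden).sum(axis=0)  # weighted sum of states, (H2,)

print(weights.shape, context.shape)
```

The context vector would then feed the 128-neuron fully connected layer and the single-neuron output head.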

3.3 Model Training


In the model training process, the dataset is critically divided to ensure the forecasting model’s effectiveness.
First, the preprocessed data is split into two distinct subsets: 80% of the data is allocated to the training set, and
the remaining 20% forms the testing set. This division is essential for allowing the model to learn patterns and
relationships from the training data while preserving a portion of the data for evaluation. The testing set is also
used as a validation set during training to monitor the model’s generalization ability and prevent overfitting.
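The split described above can be sketched as a chronological slice (no shuffling, so test data stays strictly in the future); the array sizes below are invented for illustration:

```python
import numpy as np

# Hypothetical windowed dataset: (samples, 360 history steps, 3 features).
X = np.zeros((1000, 360, 3))
y = np.zeros(1000)

# Chronological 80/20 split; per the paper, the held-out 20% doubles as
# the validation set monitored during training.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

print(X_train.shape, X_test.shape)  # (800, 360, 3) (200, 360, 3)
```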
The training set is used to build and fine-tune the forecasting models. During this phase, the model learns
underlying temporal patterns from the historical solar power and radiation data through backpropagation and
optimization techniques, such as the Adam optimizer. The model iteratively adjusts its internal weights to
minimize the error between the predicted and actual values. With each epoch, the model’s predictions become
progressively more accurate as it learns to capture the complex relationships between input features, including
time (hour), solar radiation, and generator power.
During training, the model processes input sequences of length 360, representing historical data from the past 15 days (15 × 24 hourly samples) for the AT dataset and the past 6 hours (6 × 60 one-minute samples) for the TNSPP dataset, allowing it to identify both short-term and
long-term temporal dependencies. Evaluation metrics such as Mean Absolute Error (MAE), Root Mean Square
Error (RMSE), and Mean Absolute Percentage Error (MAPE) are employed throughout the training process to
assess the model’s predictive power. These metrics are computed for both the training and validation sets after
each epoch, enabling continuous monitoring of the model’s performance.
To enhance the robustness of the model, early stopping is employed to halt training when the performance on
the validation set plateaus or deteriorates, preventing overfitting. The model is also trained with dropout regularization to mitigate overfitting by randomly deactivating neurons during training, ensuring the model generalizes
well to unseen data. This combination of techniques ensures that the trained model not only fits the training
data but also performs reliably when applied to new, unseen data, making it suitable for real-world solar power
forecasting applications.

3.4 Model Evaluation


To assess the performance of the methods, two error measures, Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), were utilized in this study. These are two of several common parameters used to gauge accuracy for continuous variables. In addition, Mean Absolute Error (MAE), Mean Square Error (MSE), and Relative Root Mean Square Error (rRMSE) are also considered.
The RMSE, MAPE, MSE, MAE, and rRMSE are calculated as follows:

Root Mean Square Error (RMSE):

    RMSE = sqrt( (1/N) Σ_{j=1}^{N} (y_j − ŷ_j)² )

Mean Absolute Percentage Error (MAPE) [13]:

    MAPE = (MAE / ȳ) × 100%

Mean Square Error (MSE):

    MSE = (1/N) Σ_{j=1}^{N} (y_j − ŷ_j)²

Mean Absolute Error (MAE):

    MAE = (1/N) Σ_{j=1}^{N} |y_j − ŷ_j|

Relative Root Mean Square Error (rRMSE):

    rRMSE = (RMSE / ȳ) × 100%

where y_j represents the real value of the jth sample, ŷ_j is the predicted value for the jth sample, ȳ is the average measured value during the forecasting period, and N is the number of samples used for evaluation, specifically the number of samples in the test set.
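The five error measures defined above can be collected in one helper. Note the assumption, taken from the definitions, that MAPE and rRMSE are normalized by the mean measured value ȳ and reported in percent; the sample values are made up:

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """MSE, RMSE, MAE, plus MAPE and rRMSE normalised by the mean measured
    value y_bar and reported in percent, per the definitions above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = mse ** 0.5
    mae = float(np.mean(np.abs(err)))
    y_bar = float(y_true.mean())
    return {"MSE": mse, "RMSE": rmse, "MAE": mae,
            "MAPE": mae / y_bar * 100.0, "rRMSE": rmse / y_bar * 100.0}

m = forecast_errors([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])
print({k: round(v, 3) for k, v in m.items()})
```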

4 Experiments and Analysis


4.1 Experiment Setup
The solar forecasting model in this study is configured with key parameters to optimize its predictive capabilities. The model utilizes the Adam optimizer with a batch size of 512 and is trained over 400 epochs. It consists of
two LSTM layers in each direction (forward and backward), each containing 128 neurons, and employs a dropout
value of 0.2 to mitigate overfitting. The learning rate is set at 0.001, balancing the model’s learning efficiency
while maintaining training stability.
The input features for the model include three sequences: time (hour), solar radiation, and generator power.
The length of each sequence fed into the model is 360 time steps. However, the sampling rates differ between
the two datasets. For the TNSPP dataset, which is sampled at 1-minute intervals, each hour contains 60 data
samples, resulting in a higher resolution dataset. In contrast, the AT dataset is sampled at hourly intervals,
meaning each hour corresponds to a single data sample. This difference in sampling frequency affects how the
model processes the data, with the TNSPP dataset offering fine-grained, high-frequency information, while the
AT dataset provides broader, less granular time series data.
In this study, the same architecture used for the proposed BiLSTM-Attn model is applied to other model
variants, such as LSTM, GRU, and standard BiLSTM, to ensure a fair comparison. This consistency in architecture allows us to isolate the impact of the attention mechanism and bidirectional processing in BiLSTM-Attn,
while maintaining the same number of layers, neurons, and hyperparameters across all models. This approach
ensures that performance differences can be attributed to the model’s inherent capabilities rather than variations
in architecture design.
These parameters were selected to ensure the model can effectively learn patterns in solar power output,
taking into account the varying levels of data granularity between the two datasets.

4.2 Prediction Results and Analysis


Visualization of One-Week Forecasting Results
In Figure 4a, the TNSPP dataset, sampled at 1-minute intervals, reveals detailed fluctuations in solar generator
power due to factors like cloud coverage or sunlight variations. The model captures these trends reasonably well,
though small deviations suggest some difficulty in handling short-term volatility. This high-frequency data is
important for real-time applications like grid management, but requires improved noise handling for more precise
short-term forecasting.
Figure 4: Forecasting results over one week: (a) comparison of actual and predicted power values on the TNSPP dataset; (b) comparison of actual and predicted power values on the AT dataset.

Model         MAE      MAPE (%)   RMSE      rRMSE (%)
GRU           7.15     4.62       13.95     9.03
LSTM          7.9      5.11       15.489    10.02
BiLSTM        9.36     6.05       15.26     9.88
GRU-Attn      7.5639   4.8387     14.6015   9.3406
LSTM-Attn     7.3969   4.7319     14.2588   9.1215
BiLSTM-Attn   7.00     4.48       13.47     8.62

Table 1: Comparison of test errors of the models on the AT dataset.

In contrast, Figure 4b, with hourly-sampled AT data, shows a smoother and more stable power generation
pattern. The model accurately predicts power output, with nearly perfect alignment between predicted and actual
values. This suggests that the reduced noise and more regular patterns in hourly data make it easier for the model
to forecast. Hourly sampling is ideal for longer-term planning and forecasting, where daily or weekly trends are
the focus.
The model performs well across both datasets, with room for improvement in handling short-term fluctuations
in high-resolution data like the TNSPP dataset. For long-term forecasts, the model excels, as seen with the AT
dataset, highlighting the impact of data resolution on forecasting accuracy.
Comprehensive Evaluation on the Full Test Set

Model         MAE        MAPE (%)   RMSE       rRMSE (%)
GRU           1483.07    1.38       3531.02    3.29
LSTM          1447.88    1.35       3438.64    3.21
BiLSTM        1485.89    1.39       3548.674   3.31
LSTM-Attn     1481.63    1.38       3398.13    3.18
GRU-Attn      1517.65    1.42       3470.78    3.24
BiLSTM-Attn   1433.51    1.34       3429.00    3.20

Table 2: Comparison of test errors of the models on the TNSPP dataset.

Tables 1 and 2 present the test error metrics for different models applied to the AT and TNSPP datasets, respectively. The BiLSTM-Attention (BiLSTM-Attn) model shows the strongest overall performance across the metrics MAE, MAPE, RMSE, and rRMSE. For the AT dataset, the BiLSTM with attention mechanism achieves the lowest errors, with an MAE of 7.00, MAPE of 4.48%, RMSE of 13.47, and rRMSE of 8.62%. This demonstrates its ability to capture complex temporal patterns more effectively than other models, such as GRU, LSTM, and standard BiLSTM.
On the TNSPP dataset, the BiLSTM-Attention model again leads with the lowest MAE (1433.51) and MAPE (1.34%), while the LSTM-Attention model achieves the lowest RMSE (3398.13) and rRMSE (3.18%). This shows the adaptability and robustness of attention-augmented models across different datasets, further highlighting their suitability for time series forecasting tasks. Models like GRU and LSTM show competitive results but consistently trail the BiLSTM-Attention model on most error metrics across both datasets.
Future directions in forecasting research could focus on improving model accuracy by exploring more advanced
algorithms and techniques that can capture complex patterns and relationships in data. Incorporating additional
data sources, handling uncertainty, and developing more adaptable models that can generalize across different
forecasting tasks are also promising areas. Furthermore, enhancing the computational efficiency of forecasting
models for real-time applications and addressing challenges like data sparsity, missing values, and irregular time
intervals would broaden their use in various fields. Finally, improving interpretability and explainability of
forecasting models will be crucial for gaining insights and building trust in their predictions.

5 Conclusion
This study highlights the effectiveness of the BiLSTM-Attn model in solar power forecasting, demonstrating
how the attention mechanism significantly enhances prediction accuracy. By allowing the model to focus on the
most relevant temporal features, the attention mechanism enables BiLSTM-Attn to consistently outperform other
methods across various datasets, including both high-frequency and hourly-sampled data. The model’s strong
performance, reflected in its low error metrics such as MAE, MAPE, RMSE, and rRMSE, confirms its ability to
manage the complexities and variabilities inherent in solar power data, making it a robust solution for practical
renewable energy forecasting applications.
These results emphasize the model’s adaptability to diverse time series patterns, proving its value for both
short-term and long-term forecasting. Future research should focus on incorporating additional factors, such
as weather conditions and real-time data, to further improve accuracy. Enhancing the model’s computational
efficiency for real-time applications and addressing challenges like missing data and outlier detection will also be
essential for broader adoption in solar power management systems. By addressing these aspects, the BiLSTM-Attn
model can play a pivotal role in optimizing renewable energy forecasting and improving grid management.

Acknowledgement
We acknowledge Ho Chi Minh City University of Technology (HCMUT), VNU-HCM for supporting this study. We also extend our sincere thanks to Mr. Nguyen Duc Huy, Mr. Ngo Hoang Bach, and Mr. Dang Tran Hoang for their valuable contributions to this research.

References
[1] Utpal Kumar Das, Kok Soon Tey, Mehdi Seyedmahmoudian, Saad Mekhilef, Moh Yamani Idna Idris, Willem
Van Deventer, Bend Horan, and Alex Stojcevski. Forecasting of photovoltaic power generation and model
optimization: A review. Renewable and Sustainable Energy Reviews, 81:912–928, 2018.
[2] Tran Huynh Ngc and Le Thanh Nghi. Tich hop dien mat troi vao luoi dien - cac anh huong, nguy co va giai phap khac phuc [Integrating solar power into the grid: impacts, risks, and remedies]. PECC2, 2021.
[3] Yukta Mehta, Rui Xu, Benjamin Lim, Jane Wu, and Jerry Gao. A review for green energy machine learning
and ai services. Energies, 16(15):5718, 2023.
[4] Yuan-Kang Wu, Cheng-Liang Huang, Quoc-Thang Phan, and Yuan-Yao Li. Completed review of various
solar power forecasting techniques considering different viewpoints. Energies, 15(9):3320, 2022.
[5] Syed Altan Haider, Muhammad Sajid, Hassan Sajid, Emad Uddin, and Yasar Ayaz. Deep learning and
statistical methods for short-and long-term solar irradiance forecasting for islamabad. Renewable Energy,
198:51–60, 2022.
[6] Sheng-Xiang Lv and Lin Wang. Deep learning combined wind speed forecasting with hybrid time series
decomposition and multi-objective parameter optimization. Applied Energy, 311:118674, 2022.
[7] Elham M Al-Ali, Yassine Hajji, Yahia Said, Manel Hleili, Amal M Alanzi, Ali H Laatar, and Mohamed Atri.
Solar energy production forecasting based on a hybrid cnn-lstm-transformer model. Mathematics, 11(3):676,
2023.
[8] John W Graham. Missing data analysis: Making it work in the real world. Annual review of psychology,
60(1):549–576, 2009.
[9] Fiona M Shrive, Heather Stuart, Hude Quan, and William A Ghali. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods. BMC Medical Research Methodology, 6:1–10, 2006.
[10] Garrett M Fitzmaurice, Nan M Laird, and James H Ware. Applied longitudinal analysis. John Wiley & Sons,
2012.
[11] T. Phan. So sanh mot so phuong phap xu ly du lieu thieu cho chuoi du lieu thoi gian 1 chieu [A comparison of methods for handling missing data in univariate time series]. Tap Chi Khoa hoc Nong Nghiep Viet Nam, 2020.
[12] Harshada C Mandhare and SR Idate. A comparative study of cluster based outlier detection, distance
based outlier detection and density based outlier detection techniques. In 2017 international conference on
intelligent computing and control systems (ICICCS), pages 931–935. IEEE, 2017.
[13] Cuc Dieu Tiet Dien Luc. Quyet dinh ban hanh quy trinh du bao cong suat, dien nang phat cua cac nguon dien nang luong tai tao [Decision promulgating the procedure for forecasting the power and energy output of renewable energy sources], 2024. Decision No. 07, signed date 14/3.
