
A Deep Reinforcement Learning Based Approach for Optimal Active Power Dispatch

Jiajun Duan1, Haifeng Li2, Xiaohu Zhang1, Ruisheng Diao1, Bei Zhang1, Di Shi1*, Xiao Lu2, Zhiwei Wang1, Siqi Wang1
1 Global Energy Interconnection Research Institute North America (GEIRINA), San Jose, USA
2 State Grid Jiangsu Electric Power Company, Nanjing, China
di.shi@geirina.net

The project is supported by the State Grid Science and Technology Program.

Abstract—The stochastic and dynamic nature of renewable energy sources and power electronic devices creates unique challenges for modern power systems. One such challenge is that the conventional optimal active power dispatch (OAPD) method, which relies on mathematical system models, is limited in its ability to handle uncertainties caused by renewables and other system contingencies. In this paper, a deep reinforcement learning (DRL) based method is presented to provide a near-optimal solution to the OAPD problem without system modeling. The DRL agent undergoes offline training, based on which it is able to obtain the OAPD points under unseen scenarios, e.g., different load patterns. The DRL-based OAPD method is tested on the IEEE 14-bus system, thereby validating its feasibility to solve the OAPD problem. Its utility is further confirmed in that it can be leveraged as a key component for solving future model-free AC-OPF problems.

Index Terms—Artificial intelligence, optimal active power dispatch, deep reinforcement learning, neural network.

I. INTRODUCTION

High penetration of renewable generation, new energy storage devices, and emerging electricity market behavior have all thoroughly changed the characteristics of conventional power grids, bringing significant uncertainties (e.g., the Californian duck curve) [1]. Conventional power grid operation methods that rely heavily on the system model now face grand challenges [2]. For example, if a mathematical model of the power grid is not available or is inaccurate, most of the existing model-based methods will lose their effectiveness. The same issue also occurs in optimal active power dispatch (OAPD) approaches, where the system model is usually required [3].

When using conventional OAPD methods, the optimization problem is typically formulated as an objective function subject to various constraints under known system conditions and models, resulting in an optimal solution derived from known system dynamics [4]. This type of formulation, however, has a major drawback. Once the real system condition changes and the system model becomes inaccurate, the calculated optimal operation cost may greatly deviate from the actual optimal cost. Furthermore, under certain circumstances, such as N-1 or N-k contingencies, the typical mathematical modeling approach becomes even more challenging due to the high complexity of the problem [5].

To date, several adaptive methods have been proposed to handle a range of environment uncertainties [6]-[8]. In [6], an adaptive robust optimization method is proposed for multi-period economic dispatch (ED) to address the dynamic uncertainty caused by wind power. The highly dynamic uncertainty is formulated as a bounded variable, and the power flow constraint is simply formulated as a power balance constraint between load and generation. Reference [7] further supplements this method by presenting an adaptive robust ED method for tie-line scheduling while factoring in multi-area power systems with wind power penetration. A similar adaptive robust strategy for ED is proposed in [8], in which ED is implemented in a distributed manner using a consensus-based framework, and communication defects are studied during problem formulation. These adaptive methods are usually designed via a two-stage decision-making structure, with one of the stages estimating the time-varying system dynamics. However, a common issue in this approach is that the problems are formulated based on linearized system models derived from a dc power flow analysis, an assumption that is improper, particularly since the power system is well known for its high nonlinearity and non-convexity, and especially when the system has a high penetration of renewable generation and power electronic devices.

One solution proposed in [9] is to use a nonlinear function or neural network (NN) instead of an adaptive function to estimate the nonlinear system dynamics. However, the online training process of this approach is computationally intensive and vulnerable to environmental noise. Even though it has been fully tested in an experimental environment, its practical performance cannot be guaranteed.

Recently, applications of artificial intelligence (AI) such as AlphaGo, ATARI games, and robotics have shed light on the potential of deep reinforcement learning (DRL) approaches to solve optimal control problems [10]. After adequate offline training, the DRL agent can make a series of optimal decisions to achieve the best goal.
In this paper, a DRL-based method is proposed to solve the AC-OAPD problem. We modify the double-deep-Q-network (DDQN) based algorithm proposed in [11] to dynamically adjust the generation output of each generator. The proposed method considers the full ac power flow equations, and the corresponding constraints are addressed in either the DRL formulation or the power system environment. It should be noted that, since this work is mainly focused on a conceptual proof of the feasibility of using DRL to solve the OAPD problem, the thermal limits of the transmission lines and voltage adjustments at the generator buses are not taken into consideration.

After a certain period of training, the DRL agent is able to solve the AC-OAPD problem quickly even under unseen scenarios. The proposed framework is a key component of the Grid Mind platform, which has been developed for autonomous grid operation using state-of-the-art AI techniques [10]. We believe that this framework holds promise to be generalized in the future to solve the AC-OPF problem. The effectiveness of the proposed DRL-based AC-OAPD method is demonstrated through simulations on the IEEE 14-bus system.

The remainder of this paper is organized as follows. Section II provides background on DRL and the formulation of the AC-OAPD problem. Section III discusses the in-depth implementation. In Section IV, case studies are performed using the proposed method, with promising results on the IEEE 14-bus test system. Finally, conclusions are drawn in Section V and future research work is identified.

II. METHODOLOGY OF DRL

A. DRL: Background

A general framework diagram of DRL is illustrated in Fig. 1. The environment block is a physical or dynamic system, e.g., a power grid. The DRL agent is a controller that continuously interacts with the environment. An iteration of interaction starts when the DRL agent receives the state measurement (s) of the environment. The agent then provides a control action (a) according to the objective and the definition of the reward function. After the environment executes the action, it generates a new state (s') and the corresponding reward (r). Based on the new observation of state and reward, the DRL agent updates its framework and continues to improve its strategy.

Fig. 1 The framework diagram of deep reinforcement learning: the DRL agent and the environment exchange state (s), action (a), and reward (r).

Even though the global optimal solution of DRL cannot be guaranteed, the DRL mechanism makes it a promising tool for solving the optimization problem. For example, for a convex problem, the DRL solution can be guaranteed to be globally optimal. For a non-convex problem, on the other hand, various methods such as evolutionary programming and random perturbation can be applied to avoid being trapped at a local optimum [16]. In this work, the double-deep-Q-network (DDQN) method with a decaying ε-greedy policy is developed to solve the OAPD problem.

B. Principles of DQN and DDQN

The deep-Q-network (DQN) is a value-based DRL algorithm developed from the classical RL method, i.e., Q-learning, where a deep neural network (DNN) is applied to estimate the Q-function. Due to the estimation/prediction capability of the DNN, the DQN method can handle unseen states [12]. A general formulation for updating the value function can be presented as

Q(s,a) = Q(s,a) + α[r + γ max_{a'} Q(s',a') − Q(s,a)]    (1)

where α is the learning rate and γ is the discount rate; s and s' represent the current state and next state, respectively; a and a' represent the current action and next action, respectively. The parameters of the DNN are updated by minimizing the error between the actual and estimated Q-values, [r + γ max_{a'} Q(s',a') − Q(s,a)].

To solve the convergence problem, a double DNN structure is applied in DQN, which is then formulated as a new algorithm called DDQN. One of the DNNs, termed the target network, fixes its parameters for a period of time and updates them periodically. The other DNN, called the evaluation network, continuously updates its parameters based on the estimation error with respect to the target network. Both networks have the same structure but different parameters. In this way, the occasional oscillation caused by a large temporal-difference error can be avoided.

To balance exploration and exploitation, the decaying ε-greedy method is applied during the training process. Thus, in each iteration i, the DRL agent has a probability ε_i of choosing an action randomly, and this probability keeps decaying along with the training process as

ε_{i+1} = r_d × ε_i,  if ε_i > ε_min
ε_{i+1} = ε_min,      otherwise    (2)

where r_d is a constant decay rate. By properly tuning the value of ε, the DRL agent will have a better chance of reaching the global optimum, e.g., by adding more perturbations when it lingers at a local optimum. Various regularization methods have also been applied during the DRL agent training process to improve performance, e.g., random layer dropout, feature/batch normalization, and prioritized sampling [13].
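To make the update rule in (1) and the decay schedule in (2) concrete, the following sketch shows a decaying ε-greedy policy together with a DDQN-style target computed from a periodically synchronized target network. It is only a minimal illustration under assumed dimensions, with a linear stand-in for the DNN and a toy random environment; it is not the authors' implementation.

```python
import numpy as np

# Minimal illustration of Eqs. (1)-(2): decaying epsilon-greedy exploration and a
# DDQN-style target from a periodically synchronized target network. The sizes,
# the linear Q-approximator, and the random "environment" below are hypothetical.
rng = np.random.default_rng(0)
n_states, n_actions = 8, 3              # hypothetical sizes
alpha, gamma = 0.001, 0.95              # learning and discount rates (Table I values)
eps, eps_min, r_d = 1.0, 0.05, 0.999    # epsilon schedule (eps_min assumed; r_d from Table I)

W_eval = rng.normal(scale=0.1, size=(n_states, n_actions))   # evaluation network
W_target = W_eval.copy()                                     # target network

def q_values(W, s):
    """Linear stand-in for the DNN Q-function: one row of action values per state."""
    return W[s]

for step in range(1, 5001):
    s = rng.integers(n_states)
    # Decaying epsilon-greedy action selection, Eq. (2)
    if rng.random() < eps:
        a = rng.integers(n_actions)
    else:
        a = int(np.argmax(q_values(W_eval, s)))
    eps = max(eps_min, r_d * eps)

    # Hypothetical environment feedback (stands in for the power-grid simulator)
    s_next = rng.integers(n_states)
    r = -abs(s_next - n_states // 2) / n_states

    # DDQN target: the evaluation network picks a', the target network scores it
    a_next = int(np.argmax(q_values(W_eval, s_next)))
    td_target = r + gamma * q_values(W_target, s_next)[a_next]
    td_error = td_target - q_values(W_eval, s)[a]

    # Semi-gradient version of the update in Eq. (1)
    W_eval[s, a] += alpha * td_error

    if step % 200 == 0:                 # periodic synchronization of the target network
        W_target = W_eval.copy()
```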
III. FORMULATION OF DRL-BASED OAPD

In this section, the DRL method is formulated to solve the optimal active power dispatch problem. First, the formulation of using the proposed DRL to solve OAPD is presented. Then, the implementation details of the training environment of the DRL agent are illustrated.

A. Formulation of DRL for OAPD

In the formulation, the full AC power flow equations are considered, including the constraints on bus voltage and on the active and reactive power of generators. The bus voltage constraint is considered through the training of the DRL agent, which is similar to the autonomous voltage control function in the Grid Mind platform [10]. The active power generation limit of each generator is defined in the action space, and the reactive power generation limit of each generator is enforced by the in-house developed power flow solver for the Linux operating system.
Since this work is mainly a conceptual proof, the line flow constraint is omitted due to a limitation of the current version of the power flow solver; however, it can be easily incorporated into future versions.

The system states fed into the DRL agent include the voltage phasors and the active and reactive power flows on the transmission lines. The generator located at the slack bus, which is used to balance demand and supply, is not directly adjusted by the agent. The action space of each remaining generator is defined as [+0.5, −0.5, 0] MW, meaning that the active power output of each generator can increase by 0.5 MW, decrease by 0.5 MW, or remain unchanged at the current operating point. Thus, for a power system containing N generators, one of which is at the slack bus, the total action space is 3^(N−1). Before an action is actually applied to the environment, the active power generation constraint of each generator is checked to guarantee that it is reasonable. In addition, the initial bus voltage magnitude settings of the generators are fixed at the optimal solution provided by the classic AC-OPF solver, i.e., MATPOWER [14]. Then, during the training process, the DRL agent continually checks whether there is any voltage violation. If so, corresponding rewards are assigned to the agent to avoid future voltage violations. The predefined operation zone for voltage is illustrated in Fig. 2, where the desired target is to keep the voltage within 0.95 to 1.05 per unit (p.u.). Meanwhile, the agent acts to decrease the operation cost. Correspondingly, the reward function can be defined as

R_i = (R_p − β·cost),  if V_j ∈ [0.95, 1.05] p.u. for all buses j
R_i = (−R_n),          if V_j ∉ [0.95, 1.05] p.u. for some bus j
R_i = (−R_e),          if the power flow diverges    (3)
where R_p is a positive reward, −R_n is a negative reward, and −R_e is a large negative penalty; β is a constant weight to scale the generation cost.

Fig. 2 The predefined voltage profile zone: voltages in the normal operation zone (0.95–1.05 p.u.) receive a positive reward, voltages in the violation zones (0.8–0.95 p.u. and 1.05–1.25 p.u.) receive a negative reward, and a diverged solution receives a large penalty.

Define P_i as the active power generation of generator i; a quadratic cost function can then be formulated as

cost = Σ_{i=1}^{n} (a_i P_i^2 + b_i P_i)    (4)

where a_i and b_i are cost coefficients. Note that, since system security is one of the primary concerns for system operators, the three terms (R_p − β·cost, −R_n, and −R_e) need to be properly weighted to achieve the desired target.
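Read together, (3) and (4) amount to a reward routine evaluated after every power flow solution. The sketch below illustrates that reading; the constants R_P, R_N, and R_E and all variable names are assumptions (only β = 0.02 appears in Table I), so it should be taken as an interpretation rather than the paper's code.

```python
import numpy as np

# Hypothetical reward constants; only beta = 0.02 is reported in Table I.
R_P, R_N, R_E, BETA = 10.0, 10.0, 100.0, 0.02

def generation_cost(p_gen, a_coef, b_coef):
    """Quadratic production cost of Eq. (4), in $/hr."""
    p_gen, a_coef, b_coef = map(np.asarray, (p_gen, a_coef, b_coef))
    return float(np.sum(a_coef * p_gen**2 + b_coef * p_gen))

def reward(v_bus, p_gen, a_coef, b_coef, converged, v_lo=0.95, v_hi=1.05):
    """Reward of Eq. (3): cost-sensitive when all bus voltages are in band,
    penalized on a voltage violation, heavily penalized on divergence."""
    if not converged:
        return -R_E
    v_bus = np.asarray(v_bus)
    if np.any((v_bus < v_lo) | (v_bus > v_hi)):
        return -R_N
    return R_P - BETA * generation_cost(p_gen, a_coef, b_coef)

# Example call: all voltages in band, so the reward is R_P - BETA * cost.
print(reward([1.0] * 14, [176.9, 48.2, 5.8, 11.5, 0.3],
             [0.043, 0.25, 0.01, 0.01, 0.01], [20, 20, 40, 40, 40], converged=True))
```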
Fig. 3 Designed RL agent for optimal active power dispatch: the Grid Mind agent observes system states from the power system environment, selects a +0.5 MW, −0.5 MW, or 0 MW adjustment for each controllable generator G#1 through G#n, and receives the corresponding reward.

The diagram of the proposed DRL agent for OAPD is shown in Fig. 3. During the training process, a complete episode is defined as follows. The agent observes the state of an independent power flow snapshot and takes various actions to interact with the environment to maximize the reward until one of the termination conditions is met. The termination of an episode is triggered when one of the following conditions is met: 1) the maximum iteration count is reached, 2) a bus voltage violation occurs, or 3) the power flow diverges. During the testing period, two different principles can be applied to obtain the optimal solution. The first method is to continuously compare the OAPD result with the benchmark available in MATPOWER [14]; once the calculated cost is equal to or lower than the benchmark optimal cost, the DRL agent can output the solution. The second method is to allow the agent to run for a predefined maximum number of iterations; once the maximum iteration count is reached, the result with the minimum cost is output.

In this paper, the second method is applied during the training period so that the DRL agent is encouraged to find a better solution, and the first method is applied during the testing period to evaluate the efficiency. In the next section, we show that although it takes hundreds of steps for the DRL agent to find the optimal solution during the training period, it only takes tens of steps to solve the problem during the testing period.
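The episode logic just described can be summarized as a short training loop. The sketch below is schematic: PowerFlowEnv, agent, and their methods are hypothetical stand-ins for the in-house simulator and DDQN agent interfaces, not the Grid Mind implementation.

```python
# Schematic training episode following Section III-A. "env", "agent",
# and all method names are hypothetical stand-ins for the in-house tools.
def run_episode(env, agent, max_steps=1000):
    """One training episode: interact until the step limit, a voltage violation,
    or a diverged power flow; return the cheapest feasible dispatch seen."""
    state = env.reset()                           # independent power-flow snapshot
    best_cost, best_dispatch = float("inf"), None
    for _ in range(max_steps):                    # termination condition 1)
        action = agent.act(state)                 # epsilon-greedy over the 3^(N-1) actions
        state_next, reward, info = env.step(action)   # runs the AC power flow
        agent.remember(state, action, reward, state_next)
        agent.learn()                             # DDQN update, Eq. (1)

        if not info["converged"] or info["voltage_violation"]:
            break                                 # termination conditions 2) and 3)
        if info["cost"] < best_cost:
            best_cost, best_dispatch = info["cost"], info["dispatch"]
        state = state_next
    return best_cost, best_dispatch               # "minimum cost" rule used in training
```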
B. Implementation of DRL Agent for OAPD

The platform used to train and test the DRL agent is the Ubuntu 16 Linux operating system (64-bit). The server is equipped with an Intel Xeon E7-8893 v3 CPU at 3.2 GHz and 528 GB of memory. All the DRL training and testing processes are performed on this platform. To mimic a real power system environment, an in-house developed power grid simulator is adopted, which runs in a Linux environment with multiple functional modules, such as AC power flow and voltage stability analysis. In this work, the AC power flow module is leveraged to interact with the DRL agent. Intermediate files are used to exchange information between the grid simulator and the DRL agent, including the power flow information file saved in PSS/E v26 raw format and the power flow solution results saved in text files. The proposed framework is programmed in Python with the TensorFlow libraries [15].
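The file-based hand-off described above can be pictured as a thin environment wrapper around the solver. The sketch below only illustrates the pattern; the solver executable name, file names, and result layout are assumptions and do not correspond to the actual GEIRINA tool chain.

```python
import subprocess
from pathlib import Path

# Sketch of the intermediate-file exchange described above. The executable name
# "acpf_solver", the file names, and the result layout are hypothetical.
class FileExchangeEnv:
    def __init__(self, workdir="./workdir", case_file="ieee14.raw"):
        self.workdir = Path(workdir)
        self.workdir.mkdir(parents=True, exist_ok=True)
        self.case_file = self.workdir / case_file       # PSS/E v26 raw-format case

    def step(self, p_setpoints):
        """Write the adjusted dispatch, run the AC power flow, read the results."""
        # 1) write the generator set-points next to the raw case file
        (self.workdir / "dispatch.txt").write_text(
            "\n".join(f"{i + 1} {p:.3f}" for i, p in enumerate(p_setpoints)))
        # 2) invoke the external AC power flow solver (hypothetical CLI)
        proc = subprocess.run(
            ["acpf_solver", self.case_file.name, "dispatch.txt", "solution.txt"],
            cwd=self.workdir, capture_output=True, text=True)
        # 3) parse the text-file solution: convergence flag, bus voltages, cost
        converged = proc.returncode == 0
        solution = (self.workdir / "solution.txt").read_text() if converged else ""
        return converged, solution
```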
IV. CASE STUDIES AND DISCUSSION

The proposed DRL-based OAPD method is tested on the IEEE 14-bus system model, whose single-line diagram is shown in Fig. 4. The IEEE 14-bus system consists of 14 buses, 5 generators, 11 loads, 17 lines, and 3 transformers. Since one generator is located at the slack bus, the total action space is 3^4 = 81. The total system load in the base case is 242 MW and 73.5 MVAr.
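To make the 3^4 = 81 figure concrete, the following sketch enumerates a discrete action space in which each of the four non-slack generators is adjusted by +0.5, −0.5, or 0 MW; the particular index ordering is an illustrative choice, not taken from the paper.

```python
from itertools import product

# Enumerate the discrete OAPD action space for the IEEE 14-bus case:
# four controllable (non-slack) generators, each with {+0.5, -0.5, 0} MW moves.
STEP_CHOICES = (+0.5, -0.5, 0.0)      # MW adjustments per generator
N_CONTROLLABLE = 4                    # 5 generators minus the slack unit

ACTIONS = list(product(STEP_CHOICES, repeat=N_CONTROLLABLE))
assert len(ACTIONS) == 3 ** N_CONTROLLABLE == 81

def decode(action_index):
    """Map a discrete action index (0..80) to per-generator MW adjustments."""
    return ACTIONS[action_index]

print(decode(0))    # (0.5, 0.5, 0.5, 0.5): raise every controllable unit by 0.5 MW
```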
To demonstrate the effectiveness and adaptability of the proposed OAPD method, two case studies are conducted. In the first study, the agent is trained using one base case under a normal operating condition. The agent is then tested on an additional 45 cases with the same loading condition but different initial values of power generation. In the second study, the agent trained in the first case is tested on an additional 4 cases with different load levels ranging from 80% to 120%.

The detailed parameter settings of the tested system and the DRL agent are given in Table I.

Fig. 4 Single-line diagram of the IEEE 14-bus system.

TABLE I
MAJOR PARAMETERS OF DRL
α = 0.001;  γ = 0.95
r_d = 0.999;  β = 0.02
Memory size = 2000;  Mini-batch size = 200
P constraint of G1 (MW): 37.9~245.4;  P constraint of G2 (MW): 48.1~157.5
P constraint of G3 (MW): 5.8~82.5;  P constraint of G4 (MW): 11.4~110.5
P constraint of G5 (MW): 0.3~80.1;  Q constraint of G1 (MVAr): -132.2~76.1
Q constraint of G2 (MVAr): -76.9~36.1;  Q constraint of G3 (MVAr): -0.8~48.5
Q constraint of G4 (MVAr): -19.1~19.4;  Q constraint of G5 (MVAr): -8.0~19.2
a1~a5: 0.043, 0.25, 0.01, 0.01, 0.01;  b1~b5: 20, 20, 40, 40, 40

A. Case Study I – Different Initial Power Generation Points

In this case study, the loading condition is fixed. Under this load pattern, the optimal generation cost calculated with MATPOWER is 7.1341 k$/hr, which can be considered the standard optimization solution. Representative results obtained by the DRL agent are summarized in Table II. As can be seen, across the simulation results of 45 different cases, four different types of solutions are obtained. Among them, the standard optimization solution is achieved in 29 cases (i.e., 63%).

There are several reasons that lead to a sub-optimal solution. First, as mentioned before, the DRL algorithm tends to settle at a sub-optimal solution because of its gradient-based policy; nevertheless, promising results are still observed by using different techniques. The second reason is the limitation of our in-house developed power flow solver: the minimum tolerance of the power mismatch can only be set to 2e-3. With further development of the power flow solver, better performance can be achieved. The third reason is that the action can only be adjusted discretely in steps of 0.5 MW; thus, certain values of active power generation are omitted. With a smaller step size, e.g., 0.1 MW, the DRL agent will have a better chance of reaching the global optimum. However, this becomes a trade-off between speed and accuracy, since more iterations will be needed with a smaller step size.

Fig. 5 presents an example of the exploration process of the DRL agent when minimizing the cost. As can be observed, during the exploration process the DRL agent is trapped at sub-optimal points several times. By including the corresponding perturbation techniques, the DRL agent is forced to adjust its actions and policy for further exploration and ultimately achieves its goal.

Fig. 5 Cost profile of the DRL exploration process (the annotation in the figure marks a local optimum).

TABLE II
RESULT SUMMARY OF CASE STUDY I

OAPD solution of MATPOWER:
Case # | Active Power (MW) | Cost (k$/hr) | Power Mismatch
1 | [176.9, 48.2, 5.8, 11.5, 0.3] | 7.1341 | 1e-9

OAPD solution of DRL:
Case # | Active Power (MW) | Cost (k$/hr) | Power Mismatch
1 | [176.4, 48.2, 5.8, 11.9, 0.3] | 7.1361 | 4e-3
2 | [176.9, 48.2, 5.8, 11.5, 0.3] | 7.1341 | 2e-3
3 | [175.9, 48.2, 6.3, 11.5, 0.8] | 7.1390 | 8e-3
4 | [176.2, 48.2, 5.8, 12.2, 0.3] | 7.1376 | 6e-3
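As a cross-check on Tables I and II, evaluating the quadratic cost of Eq. (4) with the listed coefficients at the MATPOWER dispatch reproduces the reported 7.1341 k$/hr benchmark. The short script below is an editorial verification, not part of the original paper.

```python
# Cross-check: Eq. (4) with the Table I coefficients applied to the Table II
# MATPOWER dispatch reproduces the reported 7.1341 k$/hr benchmark cost.
a = [0.043, 0.25, 0.01, 0.01, 0.01]      # $/(MW^2·h), Table I
b = [20, 20, 40, 40, 40]                 # $/(MW·h), Table I
p = [176.9, 48.2, 5.8, 11.5, 0.3]        # MW, Table II MATPOWER case 1

cost = sum(ai * pi**2 + bi * pi for ai, bi, pi in zip(a, b, p))
print(f"{cost / 1000:.4f} k$/hr")        # prints 7.1341 k$/hr
```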
It is also worth mentioning that, during the training period, the DRL agent takes more than 1,000 steps to reach the optimal solution. During the testing period, however, the DRL agent takes only an average of 20 steps to find the solution. This further demonstrates that the strong learning and estimation capability of the DRL mechanism makes it an effective tool for solving the OAPD problem.

B. Case Study II – Different Loading Conditions

In this case, four different loading conditions of 80%, 90%, 110%, and 120% are applied to test the effectiveness of the proposed method. Under each loading condition, the active power generation of each generator is first re-dispatched based on its inertia. Then, the DRL agent is applied to find the OAPD solution. The results are summarized in Table III.

As can be seen, even though the DRL agent is only trained on the base case, it can successfully solve the optimal generation dispatch problem under unseen scenarios. Moreover, the DRL agent is occasionally able to find a better solution, e.g., under the 120% loading condition (with a larger power mismatch error), thereby further demonstrating the adaptability of the proposed DRL method to unknown system conditions.

TABLE III
RESULT SUMMARY OF CASE STUDY II

OAPD solution of MATPOWER:
Loading | Active Power (MW) | Cost (k$/hr) | Power Mismatch
80% | [126.7, 48.1, 5.8, 11.4, 0.3] | 5.4663 | 1e-9
90% | [151.7, 48.1, 5.8, 11.4, 0.3] | 6.2656 | 1e-9
110% | [201.7, 48.2, 5.8, 11.5, 0.3] | 8.0338 | 1e-9
120% | [195.3, 48.1, 26.2, 11.4, 9.7] | 8.9876 | 1e-9

OAPD solution of DRL:
Loading | Active Power (MW) | Cost (k$/hr) | Power Mismatch
80% | [125.5, 49.2, 5.8, 11.5, 0.3] | 5.4821 | 4e-3
90% | [151.6, 48.2, 5.8, 11.4, 0.3] | 6.2667 | 2e-3
110% | [201.7, 48.2, 5.8, 11.5, 0.3] | 8.0338 | 2e-3
120% | [215.4, 48.2, 8.3, 12.5, 6.3] | 8.9345 | 5e-2

In conclusion, all the above results demonstrate the effectiveness of the proposed DRL-based OAPD method. Upon sufficient training, the DRL agent is shown to have the capability of solving the complex non-convex optimization problem under unknown system dynamics.

V. CONCLUSIONS AND FUTURE WORK

To effectively solve the optimal active power dispatch problem under growing uncertainties, this paper develops a novel method that uses deep reinforcement learning to search for the optimal operating condition with minimum generation cost. The detailed formulation and implementation process are introduced, and the advantages of this approach, as well as its limitations, are discussed and analyzed. Based on the results of tests performed on the IEEE 14-bus system, the proposed DRL-based method is demonstrated to be an effective tool for solving the OAPD problem under complex and unknown system conditions.

In future work, other practical constraints such as line flow limits will be considered and formulated into the DRL method. The voltage magnitude will be cooperatively adjusted together with the reactive power. In addition, larger power system networks, with or without contingencies, will be used to test the performance of the DRL agent.
[14] R. D. Zimmerman, C. E. Sanchez, and R. J. Thomas, "MATPOWER:
To effectively solve the optimal active power dispatch steady-state operations, planning, and analysis tools for power systems
problem under growing uncertainties, this paper develops a research and education," IEEE Transactions on Power Systems, vol. 26,
novel method that uses the deep reinforcement learning no. 1, pp. 12-19, 2011.
method to search for optimal operating conditions with [15] M. Abadi, et al., “Tensorflow: A system for large-scale machine
learning,” presented at the 12th Symposium on Operating Systems
minimum generation cost. The detailed formulation and Design and Implementation, 2016, pp. 265-283.
implementation process are introduced. The advantages of this [16] T. Weise and K. Tang, “Evolving distributed algorithms with genetic
approach, as well as its limitations are discussed and analyzed. programming,” IEEE Transactions on Evolutionary Computation, vol.
Based on the results of tests performed on the IEEE 14-bus 16, no. 2, pp. 242-265, 2012.
