Reinforcement learning
The environment is typically stated in the form of a Markov decision process (MDP), as many
reinforcement learning algorithms use dynamic programming techniques.[2] The main difference between
classical dynamic programming methods and reinforcement learning algorithms is that the latter do not
assume knowledge of an exact mathematical model of the Markov decision process, and they target large
MDPs where exact methods become infeasible.[3]
Principles
Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control
theory, operations research, information theory, simulation-based optimization, multi-agent systems,
swarm intelligence, and statistics. In the operations research and control literature, RL is called
approximate dynamic programming, or neuro-dynamic programming. The problems of interest in RL
have also been studied in the theory of optimal control, which is concerned mostly with the existence and
characterization of optimal solutions, and algorithms for their exact computation, and less with learning
or approximation (particularly in the absence of a mathematical model of the environment).
A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step $t$, the agent receives the current state $S_t$ and reward $R_t$. It then chooses an action $A_t$ from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state $S_{t+1}$, and the reward $R_{t+1}$ associated with the transition $(S_t, A_t, S_{t+1})$ is determined. The goal of a reinforcement learning agent is to learn a policy
$\pi: S \times A \to [0,1]$, $\pi(s, a) = \Pr(A_t = a \mid S_t = s)$,
that maximizes the expected cumulative reward.
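The loop below is a minimal sketch of this interaction cycle. The tiny `GridEnvironment` class, its `reset`/`step` interface, and the random policy are illustrative assumptions written inline for this example; they do not correspond to any particular library's API.

```python
import random

class GridEnvironment:
    """Toy environment: walk along a line of 5 cells; reward 1 when the last cell is reached."""
    def __init__(self):
        self.n_states = 5
        self.actions = [0, 1]  # 0 = move left, 1 = move right

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Move left or right, clipped to the line; the episode ends at the last cell.
        self.state = max(0, min(self.n_states - 1, self.state + (1 if action == 1 else -1)))
        reward = 1.0 if self.state == self.n_states - 1 else 0.0
        done = self.state == self.n_states - 1
        return self.state, reward, done

def random_policy(state, actions):
    return random.choice(actions)

env = GridEnvironment()
state = env.reset()
done = False
while not done:
    action = random_policy(state, env.actions)   # agent chooses action A_t from the current state S_t
    state, reward, done = env.step(action)       # environment returns S_{t+1} and R_{t+1}
```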
Formulating the problem as a Markov decision process assumes the agent directly observes the current
environmental state; in this case, the problem is said to have full observability. If the agent only has
access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have
partial observability, and formally the problem must be formulated as a partially observable Markov
decision process. In both cases, the set of actions available to the agent can be restricted. For example, the
state of an account balance could be restricted to be positive; if the current value of the state is 3 and the
state transition attempts to reduce the value by 4, the transition will not be allowed.
When the agent's performance is compared to that of an agent that acts optimally, the difference in
performance yields the notion of regret. In order to act near optimally, the agent must reason about long-
term consequences of its actions (i.e., maximize future rewards), although the immediate reward
associated with this might be negative.
Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-
term reward trade-off. It has been applied successfully to various problems, including energy storage,[6]
robot control,[7] photovoltaic generators,[8] backgammon, checkers,[9] Go (AlphaGo), and autonomous
driving systems.[10]
Two elements make reinforcement learning powerful: the use of samples to optimize performance, and
the use of function approximation to deal with large environments. Thanks to these two key components,
RL can be used in large environments in the following situations:
A model of the environment is known, but an analytic solution is not available;
Only a simulation model of the environment is given (the subject of simulation-based optimization);[11]
The only way to collect information about the environment is to interact with it.
Exploration
The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed
bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis
(1997).[12]
Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without
reference to an estimated probability distribution, shows poor performance. The case of (small) finite
Markov decision processes is relatively well understood. However, due to the lack of algorithms that
scale well with the number of states (or scale to problems with infinite state spaces), simple exploration
methods are the most practical.
One such method is $\varepsilon$-greedy, where $0 < \varepsilon < 1$ is a parameter controlling the amount of exploration vs. exploitation. With probability $1 - \varepsilon$, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability $\varepsilon$, exploration is chosen, and the action is chosen uniformly at random. $\varepsilon$ is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.[13]
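A minimal sketch of $\varepsilon$-greedy action selection over a table of action-value estimates; the `q_values` dictionary layout and the default $\varepsilon = 0.1$ are assumptions made for illustration.

```python
import random

def epsilon_greedy(q_values, state, actions, epsilon=0.1):
    """With probability epsilon explore; otherwise exploit the current value estimates."""
    if random.random() < epsilon:
        return random.choice(actions)                              # explore: uniform random action
    best = max(q_values.get((state, a), 0.0) for a in actions)     # best estimated action-value
    best_actions = [a for a in actions if q_values.get((state, a), 0.0) == best]
    return random.choice(best_actions)                             # exploit, breaking ties uniformly at random
```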
Criterion of optimality
Policy
The agent's action selection is modeled as a map called policy:
$\pi: S \times A \to [0,1]$
$\pi(a, s) = \Pr(A_t = a \mid S_t = s)$
The policy map gives the probability of taking action $a$ when in state $s$.[14]: 61 There are also deterministic policies.
State-value function
The state-value function $V_\pi(s)$ is defined as the expected discounted return starting with state $s$, i.e. $S_0 = s$, and successively following policy $\pi$. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state:[14]: 60
$V_\pi(s) = \operatorname{E}[G \mid S_0 = s] = \operatorname{E}\!\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1} \,\middle|\, S_0 = s\right],$
where the random variable $G$ denotes the discounted return, and is defined as the sum of future discounted rewards:
$G = \sum_{t=0}^{\infty} \gamma^t R_{t+1} = R_1 + \gamma R_2 + \gamma^2 R_3 + \cdots,$
where $R_{t+1}$ is the reward for transitioning from state $S_t$ to $S_{t+1}$, and $0 \le \gamma < 1$ is the discount rate. Because $\gamma$ is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.
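For concreteness, the discounted return of a finite reward sequence can be computed as below; the sample rewards and the discount rate $\gamma = 0.9$ are arbitrary illustrative values.

```python
def discounted_return(rewards, gamma=0.9):
    """G = R_1 + gamma*R_2 + gamma^2*R_3 + ..."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A reward of 1 received two steps in the future is weighted by gamma^2 = 0.9**2, so G is roughly 0.81.
print(discounted_return([0.0, 0.0, 1.0]))
```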
The algorithm must find a policy with maximum expected discounted return. From the theory of Markov
decision processes it is known that, without loss of generality, the search can be restricted to the set of so-
called stationary policies. A policy is stationary if the action-distribution returned by it depends only on
the last state visited (from the observation agent's history). The search can be further restricted to
deterministic stationary policies. A deterministic stationary policy deterministically selects actions based
on the current state. Since any such policy can be identified with a mapping from the set of states to the
set of actions, these policies can be identified with such mappings with no loss of generality.
Brute force
The brute force approach entails two steps:
1. For each possible policy, sample returns while following it.
2. Choose the policy with the largest expected discounted return.
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy.
These problems can be ameliorated if we assume some structure and allow samples generated from one
policy to influence the estimates made for others. The two main approaches for achieving this are value
function estimation and direct policy search.
Value function
Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a
set of estimates of expected discounted returns $\operatorname{E}[G]$ for some policy $\pi$ (usually either the "current" [on-
policy] or the optimal [off-policy] one).
These methods rely on the theory of Markov decision processes, where optimality is defined in a sense
stronger than the one above: A policy is optimal if it achieves the best expected discounted return from
any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can
always be found among stationary policies.
The optimal state-value is defined as $V^*(s) = \max_\pi V^\pi(s)$, the largest expected discounted return achievable from state $s$. A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since the expected discounted return under $\pi$ equals $\operatorname{E}[V^\pi(S)]$, where $S$ is a state randomly sampled from the distribution $\mu$ of initial states (so $\mu(s) = \Pr(S_0 = s)$).
Although state-values suffice to define optimality, it is useful to define action-values. Given a state $s$, an action $a$ and a policy $\pi$, the action-value of the pair $(s, a)$ under $\pi$ is defined by
$Q^\pi(s, a) = \operatorname{E}[G \mid S_0 = s, A_0 = a, \pi],$
where $G$ now stands for the random discounted return associated with first taking action $a$ in state $s$ and following $\pi$ thereafter.
The theory of Markov decision processes states that if $\pi^*$ is an optimal policy, we act optimally (take the optimal action) by choosing the action from $Q^{\pi^*}(s, \cdot)$ with the highest action-value at each state $s$. The action-value function of such an optimal policy ($Q^{\pi^*}$) is called the optimal action-value function and is commonly denoted by $Q^*$. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.
Assuming full knowledge of the Markov decision process, the two basic approaches to compute the
optimal action-value function are value iteration and policy iteration. Both algorithms compute a
sequence of functions $Q_k$ ($k = 0, 1, 2, \ldots$) that converge to $Q^*$. Computing these functions involves
computing expectations over the whole state-space, which is impractical for all but the smallest (finite)
Markov decision processes. In reinforcement learning methods, expectations are approximated by
averaging over samples and using function approximation techniques to cope with the need to represent
value functions over large state-action spaces.
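The following sketch shows tabular value iteration for a fully known finite MDP. The nested-dictionary layout `P[s][a] = [(prob, next_state, reward), ...]`, the discount rate, and the tolerance are assumptions chosen for illustration.

```python
def value_iteration(P, gamma=0.9, tol=1e-6):
    """Compute state-values V and a greedy policy for a known finite MDP.

    P[s][a] is a list of (probability, next_state, reward) triples.
    """
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup: best expected one-step return plus discounted value.
            q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s]}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract a greedy (deterministic stationary) policy from the converged values.
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
              for s in P}
    return V, policy
```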
Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually
terminate. Policy and value function updates occur only after the completion of an episode, making these
methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The
term "Monte Carlo" generally refers to any method involving random sampling; however, in this context,
it specifically refers to methods that compute averages from complete returns, rather than partial returns.
These methods function similarly to the bandit algorithms, in which returns are averaged for each state-
action pair. The key difference is that actions taken in one state affect the returns of subsequent states
within the same episode, making the problem non-stationary. To address this non-stationarity, Monte Carlo methods use the framework of generalized policy iteration (GPI). While dynamic programming
computes value functions using full knowledge of the Markov decision process (MDP), Monte Carlo
methods learn these functions through sample returns. The value functions and policies interact similarly
to dynamic programming to achieve optimality, first addressing the prediction problem and then
extending to policy improvement and control, all based on sampled experience.[14]
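A minimal first-visit Monte Carlo sketch for the prediction problem (estimating state-values of a fixed policy from complete episodes). The `episodes` argument is assumed to be a list of trajectories of `(state, reward)` pairs, where the reward in each pair is the one received after leaving that state; this data layout is an illustrative assumption.

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=0.9):
    """Average complete returns following the first visit to each state."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    for episode in episodes:                       # episode = [(S_t, R_{t+1}), ...]
        # Walk the episode backwards, accumulating the discounted return at each step.
        G = 0.0
        returns = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            G = episode[t][1] + gamma * G
            returns[t] = G
        # Record the return only for the first visit to each state in the episode.
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s not in seen:
                seen.add(s)
                returns_sum[s] += returns[t]
                returns_count[s] += 1
    return {s: returns_sum[s] / returns_count[s] for s in returns_sum}
```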
This policy-evaluation step suffers from several problems: (1) it may spend too much time evaluating a suboptimal policy; (2) it uses samples inefficiently, in that a long trajectory improves the estimate only of the single state-action pair that started it; (3) when the returns along trajectories have high variance, convergence is slow; (4) it works in episodic problems only; and (5) it works only in small, finite MDPs. The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman
equation.[16][17] The computation in TD methods can be incremental (when after each transition the
memory is changed and the transition is thrown away), or batch (when the transitions are batched and the
estimates are computed once based on the batch). Batch methods, such as the least-squares temporal
difference method,[18] may use the information in the samples better, while incremental methods are the
only choice when batch methods are infeasible due to their high computational or memory complexity.
Some methods try to combine the two approaches. Methods based on temporal differences also overcome
the fourth issue.
Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called $\lambda$ parameter $(0 \le \lambda \le 1)$ that can continuously interpolate between Monte Carlo methods, which do not rely on the Bellman equations, and the basic TD methods, which rely entirely on the Bellman equations. This can be effective in palliating this issue.
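As a concrete instance of the TD end of that spectrum ($\lambda = 0$), the one-step TD(0) update for state-values nudges each estimate toward the bootstrapped Bellman target; the learning rate `alpha` and the transition-tuple layout below are illustrative assumptions.

```python
def td0_update(V, transition, alpha=0.1, gamma=0.9):
    """One incremental TD(0) step: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    s, r, s_next, done = transition
    target = r if done else r + gamma * V[s_next]   # bootstrapped Bellman target
    V[s] += alpha * (target - V[s])                 # move the estimate toward the target, then discard the transition
    return V
```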
To address the fifth issue (coping with large state-action spaces), function approximation methods are used. Linear function approximation starts with a mapping $\phi$ that assigns a finite-dimensional feature vector to each state-action pair; the action-value of a pair $(s, a)$ is then obtained by linearly combining the components of $\phi(s, a)$ with a weight vector $\theta$: $Q(s, a) = \sum_i \theta_i \phi_i(s, a)$. The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.
Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants,[19] including deep Q-learning methods, where a neural network is used to represent Q, with various applications in stochastic search problems.[20]
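A tabular Q-learning sketch, combining the $\varepsilon$-greedy exploration shown earlier with the off-policy update $Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]$. The environment interface matches the toy one sketched above and, like the hyperparameters, is an assumption for illustration only.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = defaultdict(float)                                  # Q[(state, action)], defaults to 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy behavior policy.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Off-policy target uses the greedy (max) action-value of the next state.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```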
The problem with using action-values is that they may need highly precise estimates of the competing
action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to
some extent by temporal difference methods. Using the so-called compatible function approximation
method compromises generality and efficiency.
Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector $\theta$, let $\pi_\theta$ denote the policy associated with $\theta$. Defining the performance function by $\rho(\theta) = \rho^{\pi_\theta}$, the expected discounted return under $\pi_\theta$, under mild conditions this function will be differentiable as a function of the parameter vector $\theta$. If the gradient of $\rho$ were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method[21] (which is known as the likelihood ratio method in the simulation-based optimization literature).[22]
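A minimal REINFORCE-style sketch for a softmax policy on a small discrete problem. The tabular parameterization $\theta[(s,a)]$, the toy environment interface from the earlier sketches, the absence of a baseline, and the hyperparameters are all simplifying assumptions; this is a sketch of the likelihood-ratio gradient estimate, not a reference implementation.

```python
import math
import random
from collections import defaultdict

def softmax_probs(theta, state, actions):
    """Action probabilities of a softmax (Gibbs) policy over tabular preferences."""
    prefs = [theta[(state, a)] for a in actions]
    m = max(prefs)                                   # subtract the max for numerical stability
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce(env, episodes=1000, lr=0.01, gamma=0.99):
    """Monte Carlo policy gradient on a tabular softmax policy (no baseline)."""
    theta = defaultdict(float)                       # policy parameters theta[(state, action)]
    for _ in range(episodes):
        # Collect one complete episode under the current policy.
        trajectory, state, done = [], env.reset(), False
        while not done:
            probs = softmax_probs(theta, state, env.actions)
            action = random.choices(env.actions, weights=probs)[0]
            next_state, reward, done = env.step(action)
            trajectory.append((state, action, reward))
            state = next_state
        # Walk backwards, accumulate returns, and ascend the noisy gradient estimate G * grad log pi.
        G = 0.0
        for state, action, reward in reversed(trajectory):
            G = reward + gamma * G
            probs = softmax_probs(theta, state, env.actions)
            for a, p in zip(env.actions, probs):
                grad = (1.0 if a == action else 0.0) - p   # d log pi(action|state) / d theta[(state, a)]
                theta[(state, a)] += lr * G * grad
    return theta
```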
A large class of methods avoids relying on gradient information. These include simulated annealing,
cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve
(in theory and in the limit) a global optimum.
Policy search methods may converge slowly given noisy data. For example, this happens in episodic
problems when the trajectories are long and the variance of the returns is large. Value-function based
methods that rely on temporal differences might help in this case. In recent years, actor–critic methods
have been proposed and performed well on various problems.[23]
Policy search methods have been used in the robotics context.[24] Many policy search methods may get
stuck in local optima (as they are based on local search).
Model-based algorithms
Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov
decision process, the probability of each next state given an action taken from an existing state. For
instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled
transitions for a value function, in addition to the real transitions.[25] Such methods can sometimes be
extended to use of non-parametric models, such as when the transitions are simply stored and "replayed"
to the learning algorithm.[26]
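A Dyna-Q-style sketch in the spirit of the approach described above: each real transition updates both the action-values and a learned one-step model, and the model is then replayed for extra planning updates. The deterministic model, the toy environment interface from the earlier sketches, and the hyperparameters are simplifying assumptions.

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=200, planning_steps=10, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Q-learning augmented with simulated updates from a learned one-step model."""
    Q = defaultdict(float)
    model = {}                                              # model[(s, a)] = (reward, next_state, done)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Direct RL update from the real transition.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            # Model learning: remember the observed transition.
            model[(state, action)] = (reward, next_state, done)
            # Planning: replay randomly chosen remembered transitions as extra (modelled) updates.
            for _ in range(planning_steps):
                (s, a), (r, s2, d) = random.choice(list(model.items()))
                best = 0.0 if d else max(Q[(s2, b)] for b in env.actions)
                Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            state = next_state
    return Q
```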
Model-based methods can be more computationally intensive than model-free approaches, and their
utility can be limited by the extent to which the Markov decision process can be learnt.[27]
There are other ways to use models than to update a value function.[28] For instance, in model predictive
control the model is used to update the behavior directly.
Theory
Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with
provably good online performance (addressing the exploration issue) are known.
Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997).[12] Finite-
time performance bounds have also appeared for many algorithms, but these bounds are expected to be
rather loose and thus more work is needed to better understand the relative advantages and limitations.
For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based
algorithms converge under a wider set of conditions than was previously possible (for example, when
used with arbitrary, smooth function approximation).
Research
Research topics include:
actor-critic architecture[29]
actor-critic-scenery architecture[3]
adaptive methods that work with fewer (or no) parameters under a large number of
conditions
bug detection in software projects[30]
continuous learning
combinations with logic-based frameworks[31]
exploration in large Markov decision processes
human feedback[32]
interaction between implicit and explicit learning in skill acquisition
intrinsic motivation which differentiates information-seeking, curiosity-type behaviours from
task-dependent goal-directed behaviours
large-scale empirical evaluations
large (or continuous) action spaces
modular and hierarchical reinforcement learning[33]
multiagent/distributed reinforcement learning is a topic of interest. Applications are
expanding.[34]
occupant-centric control
optimization of computing resources[35][36][37]
partial information (e.g., using predictive state representation)
reward function based on maximising novel information[38][39][40]
sample-based planning (e.g., based on Monte Carlo tree search).
securities trading[41]
transfer learning[42]
TD learning modeling dopamine-based learning in the brain. Dopaminergic projections from
the substantia nigra to the basal ganglia function as the prediction error.
value-function and policy search methods
Comparison of key algorithms

| Algorithm | Description | Policy | Action space | State space | Operator |
|---|---|---|---|---|---|
| Monte Carlo | Every visit to Monte Carlo | Either | Discrete | Discrete | Sample-means of state-values or action-values |
| TD learning | State–action–reward–state | Off-policy | Discrete | Discrete | State-value |
| Q-learning | State–action–reward–state | Off-policy | Discrete | Discrete | Action-value |
| SARSA | State–action–reward–state–action | On-policy | Discrete | Discrete | Action-value |
| DQN | Deep Q Network | Off-policy | Discrete | Continuous | Action-value |
| SAC | Soft Actor-Critic | Off-policy | Continuous | Continuous | Advantage |
| DSAC[43][44][45] | Distributional Soft Actor Critic | Off-policy | Continuous | Continuous | Action-value distribution |
Self-reinforcement learning
Self-reinforcement learning (or self-learning) is a learning paradigm that does not use the concept of an immediate reward $r(s, s')$ after the transition from $s$ to $s'$ with action $a$. It does not use an external reinforcement; it only uses the agent's internal self-reinforcement. The internal self-reinforcement is provided by a mechanism of feelings and emotions. In the learning process, emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward; it only includes the state evaluation.
The self-reinforcement algorithm updates a memory matrix $W = \|w(a, s)\|$ such that in each iteration it executes the following machine learning routine:
1. In situation $s$ perform action $a$.
2. Receive a consequence situation $s'$.
3. Compute the emotion of being in the consequence situation, $v(s')$.
4. Update the crossbar memory: $w'(a, s) = w(a, s) + v(s')$.
Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-
reinforcement learning, named Crossbar Adaptive Array (CAA).[66][67] The CAA computes, in a crossbar
fashion, both decisions about actions and emotions (feelings) about consequence states. The system is
driven by the interaction between cognition and emotion.[68]
See also
Temporal difference learning
Q-learning
State–action–reward–state–action (SARSA)
Reinforcement learning from human feedback
Optimal control
Error-driven learning
Multi-agent reinforcement learning
Apprenticeship learning
Model-free (reinforcement learning)
Active learning (machine learning)
References
1. Kaelbling, Leslie P.; Littman, Michael L.; Moore, Andrew W. (1996). "Reinforcement
Learning: A Survey" (http://webarchive.loc.gov/all/20011120234539/http://www.cs.washingto
n.edu/research/jair/abstracts/kaelbling96a.html). Journal of Artificial Intelligence Research.
4: 237–285. arXiv:cs/9605103 (https://arxiv.org/abs/cs/9605103). doi:10.1613/jair.301 (http
s://doi.org/10.1613%2Fjair.301). S2CID 1708582 (https://api.semanticscholar.org/CorpusID:
1708582). Archived from the original (http://www.cs.washington.edu/research/jair/abstracts/k
aelbling96a.html) on 2001-11-20.
2. van Otterlo, M.; Wiering, M. (2012). "Reinforcement Learning and Markov Decision
Processes". Reinforcement Learning. Adaptation, Learning, and Optimization. Vol. 12.
pp. 3–42. doi:10.1007/978-3-642-27645-3_1 (https://doi.org/10.1007%2F978-3-642-27645-3
_1). ISBN 978-3-642-27644-6.
3. Li, Shengbo (2023). Reinforcement Learning for Sequential Decision and Optimal Control (h
ttps://link.springer.com/book/10.1007/978-981-19-7784-8) (First ed.). Springer Verlag,
Singapore. pp. 1–460. doi:10.1007/978-981-19-7784-8 (https://doi.org/10.1007%2F978-981-
19-7784-8). ISBN 978-9-811-97783-1. S2CID 257928563 (https://api.semanticscholar.org/C
orpusID:257928563).
4. Russell, Stuart J.; Norvig, Peter (2010). Artificial intelligence : a modern approach
(Third ed.). Upper Saddle River, New Jersey: Prentice Hall. pp. 830, 831. ISBN 978-0-13-
604259-4.
5. Lee, Daeyeol; Seo, Hyojung; Jung, Min Whan (21 July 2012). "Neural Basis of
Reinforcement Learning and Decision Making" (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC3490621). Annual Review of Neuroscience. 35 (1): 287–308. doi:10.1146/annurev-
neuro-062111-150512 (https://doi.org/10.1146%2Fannurev-neuro-062111-150512).
PMC 3490621 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3490621). PMID 22462543
(https://pubmed.ncbi.nlm.nih.gov/22462543).
6. Salazar Duque, Edgar Mauricio; Giraldo, Juan S.; Vergara, Pedro P.; Nguyen, Phuong; Van
Der Molen, Anne; Slootweg, Han (2022). "Community energy storage operation via
reinforcement learning with eligibility traces" (https://doi.org/10.1016%2Fj.epsr.2022.10851
5). Electric Power Systems Research. 212. Bibcode:2022EPSR..21208515S (https://ui.adsa
bs.harvard.edu/abs/2022EPSR..21208515S). doi:10.1016/j.epsr.2022.108515 (https://doi.or
g/10.1016%2Fj.epsr.2022.108515). S2CID 250635151 (https://api.semanticscholar.org/Corp
usID:250635151).
7. Xie, Zhaoming; Hung Yu Ling; Nam Hee Kim; Michiel van de Panne (2020). "ALLSTEPS:
Curriculum-driven Learning of Stepping Stone Skills". arXiv:2005.04323 (https://arxiv.org/ab
s/2005.04323) [cs.GR (https://arxiv.org/archive/cs.GR)].
8. Vergara, Pedro P.; Salazar, Mauricio; Giraldo, Juan S.; Palensky, Peter (2022). "Optimal
dispatch of PV inverters in unbalanced distribution systems using Reinforcement Learning"
(https://doi.org/10.1016%2Fj.ijepes.2021.107628). International Journal of Electrical Power
& Energy Systems. 136. Bibcode:2022IJEPE.13607628V (https://ui.adsabs.harvard.edu/ab
s/2022IJEPE.13607628V). doi:10.1016/j.ijepes.2021.107628 (https://doi.org/10.1016%2Fj.ij
epes.2021.107628). S2CID 244099841 (https://api.semanticscholar.org/CorpusID:24409984
1).
9. Sutton & Barto 2018, Chapter 11.
10. Ren, Yangang; Jiang, Jianhua; Zhan, Guojian; Li, Shengbo Eben; Chen, Chen; Li, Keqiang;
Duan, Jingliang (2022). "Self-Learned Intelligence for Integrated Decision and Control of
Automated Vehicles at Signalized Intersections" (https://ieeexplore.ieee.org/document/9857
655). IEEE Transactions on Intelligent Transportation Systems. 23 (12): 24145–24156.
arXiv:2110.12359 (https://arxiv.org/abs/2110.12359). doi:10.1109/TITS.2022.3196167 (http
s://doi.org/10.1109%2FTITS.2022.3196167).
11. Gosavi, Abhijit (2003). Simulation-based Optimization: Parametric Optimization Techniques
and Reinforcement (https://www.springer.com/mathematics/applications/book/978-1-4020-7
454-7). Operations Research/Computer Science Interfaces Series. Springer. ISBN 978-1-
4020-7454-7.
12. Burnetas, Apostolos N.; Katehakis, Michael N. (1997), "Optimal adaptive policies for Markov
Decision Processes", Mathematics of Operations Research, 22 (1): 222–255,
doi:10.1287/moor.22.1.222 (https://doi.org/10.1287%2Fmoor.22.1.222), JSTOR 3690147 (ht
tps://www.jstor.org/stable/3690147)
13. Tokic, Michel; Palm, Günther (2011), "Value-Difference Based Exploration: Adaptive Control
Between Epsilon-Greedy and Softmax" (http://www.tokic.com/www/tokicm/publikationen/pap
ers/KI2011.pdf) (PDF), KI 2011: Advances in Artificial Intelligence, Lecture Notes in
Computer Science, vol. 7006, Springer, pp. 335–346, ISBN 978-3-642-24455-1
14. "Reinforcement learning: An introduction" (https://web.archive.org/web/20170712170739/htt
p://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf) (PDF). Archived from the
original (http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf) (PDF) on 2017-07-
12. Retrieved 2017-07-23.
15. Singh, Satinder P.; Sutton, Richard S. (1996-03-01). "Reinforcement learning with replacing
eligibility traces" (https://link.springer.com/article/10.1007/BF00114726). Machine Learning.
22 (1): 123–158. doi:10.1007/BF00114726 (https://doi.org/10.1007%2FBF00114726).
ISSN 1573-0565 (https://search.worldcat.org/issn/1573-0565).
16. Sutton, Richard S. (1984). Temporal Credit Assignment in Reinforcement Learning (https://w
eb.archive.org/web/20170330002227/http://incompleteideas.net/sutton/publications.html#Ph
Dthesis) (PhD thesis). University of Massachusetts, Amherst, MA. Archived from the original
(http://incompleteideas.net/sutton/publications.html#PhDthesis) on 2017-03-30. Retrieved
2017-03-29.
17. Sutton & Barto 2018, §6. Temporal-Difference Learning (http://incompleteideas.net/sutton/bo
ok/ebook/node60.html).
18. Bradtke, Steven J.; Barto, Andrew G. (1996). "Learning to predict by the method of temporal
differences". Machine Learning. 22: 33–57. CiteSeerX 10.1.1.143.857 (https://citeseerx.ist.p
su.edu/viewdoc/summary?doi=10.1.1.143.857). doi:10.1023/A:1018056104778 (https://doi.o
rg/10.1023%2FA%3A1018056104778). S2CID 20327856 (https://api.semanticscholar.org/C
orpusID:20327856).
19. Watkins, Christopher J.C.H. (1989). Learning from Delayed Rewards (http://www.cs.rhul.ac.
uk/~chrisw/new_thesis.pdf) (PDF) (PhD thesis). King’s College, Cambridge, UK.
20. Matzliach, Barouch; Ben-Gal, Irad; Kagan, Evgeny (2022). "Detection of Static and Mobile
Targets by an Autonomous Agent with Deep Q-Learning Abilities" (https://www.ncbi.nlm.nih.
gov/pmc/articles/PMC9407070). Entropy. 24 (8): 1168. Bibcode:2022Entrp..24.1168M (http
s://ui.adsabs.harvard.edu/abs/2022Entrp..24.1168M). doi:10.3390/e24081168 (https://doi.or
g/10.3390%2Fe24081168). PMC 9407070 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9
407070). PMID 36010832 (https://pubmed.ncbi.nlm.nih.gov/36010832).
21. Williams, Ronald J. (1987). "A class of gradient-estimating algorithms for reinforcement
learning in neural networks". Proceedings of the IEEE First International Conference on
Neural Networks. CiteSeerX 10.1.1.129.8871 (https://citeseerx.ist.psu.edu/viewdoc/summar
y?doi=10.1.1.129.8871).
22. Peters, Jan; Vijayakumar, Sethu; Schaal, Stefan (2003). Reinforcement Learning for
Humanoid Robotics (http://web.archive.org/web/20130512223911/http://www-clmc.usc.edu/
publications/p/peters-ICHR2003.pdf) (PDF). IEEE-RAS International Conference on
Humanoid Robots. Archived from the original (http://www-clmc.usc.edu/publications/p/peters
-ICHR2003.pdf) (PDF) on 2013-05-12.
23. Juliani, Arthur (2016-12-17). "Simple Reinforcement Learning with Tensorflow Part 8:
Asynchronous Actor-Critic Agents (A3C)" (https://medium.com/emergent-future/simple-reinfo
rcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f
2). Medium. Retrieved 2018-02-22.
24. Deisenroth, Marc Peter; Neumann, Gerhard; Peters, Jan (2013). A Survey on Policy Search
for Robotics (http://eprints.lincoln.ac.uk/28029/1/PolicySearchReview.pdf) (PDF).
Foundations and Trends in Robotics. Vol. 2. NOW Publishers. pp. 1–142.
doi:10.1561/2300000021 (https://doi.org/10.1561%2F2300000021). hdl:10044/1/12051 (http
s://hdl.handle.net/10044%2F1%2F12051).
25. Sutton, Richard (1990). "Integrated Architectures for Learning, Planning and Reacting based
on Dynamic Programming". Machine Learning: Proceedings of the Seventh International
Workshop.
26. Lin, Long-Ji (1992). "Self-improving reactive agents based on reinforcement learning,
planning and teaching" (https://link.springer.com/content/pdf/10.1007/BF00992699.pdf)
(PDF). Machine Learning volume 8. doi:10.1007/BF00992699 (https://doi.org/10.1007%2FB
F00992699).
27. Zou, Lan (2023-01-01), Zou, Lan (ed.), "Chapter 7 - Meta-reinforcement learning" (https://w
ww.sciencedirect.com/science/article/pii/B9780323899314000110), Meta-Learning,
Academic Press, pp. 267–297, doi:10.1016/b978-0-323-89931-4.00011-0 (https://doi.org/10.
1016%2Fb978-0-323-89931-4.00011-0), ISBN 978-0-323-89931-4, retrieved 2023-11-08
28. van Hasselt, Hado; Hessel, Matteo; Aslanides, John (2019). "When to use parametric
models in reinforcement learning?" (https://proceedings.neurips.cc/paper/2019/file/1b742ae
215adf18b75449c6e272fd92d-Paper.pdf) (PDF). Advances in Neural Information
Processing Systems 32.
29. Grondman, Ivo; Vaandrager, Maarten; Busoniu, Lucian; Babuska, Robert; Schuitema, Erik
(2012-06-01). "Efficient Model Learning Methods for Actor–Critic Control" (https://dl.acm.org/
doi/10.1109/TSMCB.2011.2170565). IEEE Transactions on Systems, Man, and Cybernetics,
Part B (Cybernetics). 42 (3): 591–602. doi:10.1109/TSMCB.2011.2170565 (https://doi.org/1
0.1109%2FTSMCB.2011.2170565). ISSN 1083-4419 (https://search.worldcat.org/issn/1083-
4419). PMID 22156998 (https://pubmed.ncbi.nlm.nih.gov/22156998).
30. "On the Use of Reinforcement Learning for Testing Game Mechanics : ACM - Computers in
Entertainment" (https://cie.acm.org/articles/use-reinforcements-learning-testing-game-mech
anics/). cie.acm.org. Retrieved 2018-11-27.
31. Riveret, Regis; Gao, Yang (2019). "A probabilistic argumentation framework for
reinforcement learning agents". Autonomous Agents and Multi-Agent Systems. 33 (1–2):
216–274. doi:10.1007/s10458-019-09404-2 (https://doi.org/10.1007%2Fs10458-019-09404-
2). S2CID 71147890 (https://api.semanticscholar.org/CorpusID:71147890).
32. Yamagata, Taku; McConville, Ryan; Santos-Rodriguez, Raul (2021-11-16). "Reinforcement
Learning with Feedback from Multiple Humans with Diverse Skills". arXiv:2111.08596 (http
s://arxiv.org/abs/2111.08596) [cs.LG (https://arxiv.org/archive/cs.LG)].
33. Kulkarni, Tejas D.; Narasimhan, Karthik R.; Saeedi, Ardavan; Tenenbaum, Joshua B. (2016).
"Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic
Motivation" (http://dl.acm.org/citation.cfm?id=3157382.3157509). Proceedings of the 30th
International Conference on Neural Information Processing Systems. NIPS'16. USA: Curran
Associates Inc.: 3682–3690. arXiv:1604.06057 (https://arxiv.org/abs/1604.06057).
Bibcode:2016arXiv160406057K (https://ui.adsabs.harvard.edu/abs/2016arXiv160406057K).
ISBN 978-1-5108-3881-9.
34. "Reinforcement Learning / Successes of Reinforcement Learning" (http://umichrl.pbworks.co
m/Successes-of-Reinforcement-Learning/). umichrl.pbworks.com. Retrieved 2017-08-06.
35. Dey, Somdip; Singh, Amit Kumar; Wang, Xiaohang; McDonald-Maier, Klaus (March 2020).
"User Interaction Aware Reinforcement Learning for Power and Thermal Efficiency of CPU-
GPU Mobile MPSoCs" (https://ieeexplore.ieee.org/document/9116294). 2020 Design,
Automation & Test in Europe Conference & Exhibition (DATE) (http://repository.essex.ac.uk/
27546/1/User%20Interaction%20Aware%20Reinforcement%20Learning.pdf) (PDF).
pp. 1728–1733. doi:10.23919/DATE48585.2020.9116294 (https://doi.org/10.23919%2FDAT
E48585.2020.9116294). ISBN 978-3-9819263-4-7. S2CID 219858480 (https://api.semantics
cholar.org/CorpusID:219858480).
36. Quested, Tony. "Smartphones get smarter with Essex innovation" (https://www.businesswee
kly.co.uk/news/academia-research/smartphones-get-smarter-essex-innovation). Business
Weekly. Retrieved 2021-06-17.
37. Williams, Rhiannon (2020-07-21). "Future smartphones 'will prolong their own battery life by
monitoring owners' behaviour' " (https://inews.co.uk/news/technology/future-smartphones-pr
olong-battery-life-monitoring-behaviour-558689). i. Retrieved 2021-06-17.
38. Kaplan, F.; Oudeyer, P. (2004). "Maximizing Learning Progress: An Internal Reward System
for Development". In Iida, F.; Pfeifer, R.; Steels, L.; Kuniyoshi, Y. (eds.). Embodied Artificial
Intelligence. Lecture Notes in Computer Science. Vol. 3139. Berlin; Heidelberg: Springer.
pp. 259–270. doi:10.1007/978-3-540-27833-7_19 (https://doi.org/10.1007%2F978-3-540-27
833-7_19). ISBN 978-3-540-22484-6. S2CID 9781221 (https://api.semanticscholar.org/Corp
usID:9781221).
39. Klyubin, A.; Polani, D.; Nehaniv, C. (2008). "Keep your options open: an information-based
driving principle for sensorimotor systems" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2
607028). PLOS ONE. 3 (12): e4018. Bibcode:2008PLoSO...3.4018K (https://ui.adsabs.harv
ard.edu/abs/2008PLoSO...3.4018K). doi:10.1371/journal.pone.0004018 (https://doi.org/10.1
371%2Fjournal.pone.0004018). PMC 2607028 (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC2607028). PMID 19107219 (https://pubmed.ncbi.nlm.nih.gov/19107219).
40. Barto, A. G. (2013). "Intrinsic motivation and reinforcement learning". Intrinsically Motivated
Learning in Natural and Artificial Systems (https://people.cs.umass.edu/~barto/IMCleVer-cha
pter-totypeset2.pdf) (PDF). Berlin; Heidelberg: Springer. pp. 17–47.
41. Dabérius, Kevin; Granat, Elvin; Karlsson, Patrik (2020). "Deep Execution - Value and Policy
Based Reinforcement Learning for Trading and Beating Market Benchmarks". The Journal
of Machine Learning in Finance. 1. SSRN 3374766 (https://papers.ssrn.com/sol3/papers.cf
m?abstract_id=3374766).
42. George Karimpanal, Thommen; Bouffanais, Roland (2019). "Self-organizing maps for
storage and transfer of knowledge in reinforcement learning". Adaptive Behavior. 27 (2):
111–126. arXiv:1811.08318 (https://arxiv.org/abs/1811.08318).
doi:10.1177/1059712318818568 (https://doi.org/10.1177%2F1059712318818568).
ISSN 1059-7123 (https://search.worldcat.org/issn/1059-7123). S2CID 53774629 (https://api.
semanticscholar.org/CorpusID:53774629).
43. J Duan; Y Guan; S Li (2021). "Distributional Soft Actor-Critic: Off-policy reinforcement
learning for addressing value estimation errors" (https://ieeexplore.ieee.org/document/94483
60). IEEE Transactions on Neural Networks and Learning Systems. 33 (11): 6584–6598.
arXiv:2001.02811 (https://arxiv.org/abs/2001.02811). doi:10.1109/TNNLS.2021.3082568 (htt
ps://doi.org/10.1109%2FTNNLS.2021.3082568). PMID 34101599 (https://pubmed.ncbi.nlm.
nih.gov/34101599). S2CID 211259373 (https://api.semanticscholar.org/CorpusID:21125937
3).
44. Y Ren; J Duan; S Li (2020). "Improving Generalization of Reinforcement Learning with
Minimax Distributional Soft Actor-Critic" (https://ieeexplore.ieee.org/document/9294300).
2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).
pp. 1–6. arXiv:2002.05502 (https://arxiv.org/abs/2002.05502).
doi:10.1109/ITSC45102.2020.9294300 (https://doi.org/10.1109%2FITSC45102.2020.92943
00). ISBN 978-1-7281-4149-7. S2CID 211096594 (https://api.semanticscholar.org/CorpusI
D:211096594).
45. Duan, J; Wang, W; Xiao, L (2023-10-26). "DSAC-T: Distributional Soft Actor-Critic with Three
Refinements". arXiv:2310.05858 (https://arxiv.org/abs/2310.05858) [cs.LG (https://arxiv.org/
archive/cs.LG)].
46. Soucek, Branko (6 May 1992). Dynamic, Genetic and Chaotic Programming: The Sixth-
Generation Computer Technology Series. John Wiley & Sons, Inc. p. 38. ISBN 0-471-
55717-X.
47. Francois-Lavet, Vincent; et al. (2018). "An Introduction to Deep Reinforcement Learning".
Foundations and Trends in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560 (https://
arxiv.org/abs/1811.12560). Bibcode:2018arXiv181112560F (https://ui.adsabs.harvard.edu/a
bs/2018arXiv181112560F). doi:10.1561/2200000071 (https://doi.org/10.1561%2F22000000
71). S2CID 54434537 (https://api.semanticscholar.org/CorpusID:54434537).
48. Mnih, Volodymyr; et al. (2015). "Human-level control through deep reinforcement learning".
Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M (https://ui.adsabs.harvard.edu/
abs/2015Natur.518..529M). doi:10.1038/nature14236 (https://doi.org/10.1038%2Fnature142
36). PMID 25719670 (https://pubmed.ncbi.nlm.nih.gov/25719670). S2CID 205242740 (http
s://api.semanticscholar.org/CorpusID:205242740).
49. Goodfellow, Ian; Shlens, Jonathan; Szegedy, Christian (2015). "Explaining and Harnessing
Adversarial Examples". International Conference on Learning Representations.
arXiv:1412.6572 (https://arxiv.org/abs/1412.6572).
50. Behzadan, Vahid; Munir, Arslan (2017). "Vulnerability of Deep Reinforcement Learning to
Policy Induction Attacks". Machine Learning and Data Mining in Pattern Recognition.
Lecture Notes in Computer Science. Vol. 10358. pp. 262–275. arXiv:1701.04143 (https://arxi
v.org/abs/1701.04143). doi:10.1007/978-3-319-62416-7_19 (https://doi.org/10.1007%2F978
-3-319-62416-7_19). ISBN 978-3-319-62415-0. S2CID 1562290 (https://api.semanticschola
r.org/CorpusID:1562290).
51. Huang, Sandy; Papernot, Nicolas; Goodfellow, Ian; Duan, Yan; Abbeel, Pieter (2017-02-07).
Adversarial Attacks on Neural Network Policies (http://worldcat.org/oclc/1106256905).
OCLC 1106256905 (https://search.worldcat.org/oclc/1106256905).
52. Korkmaz, Ezgi (2022). "Deep Reinforcement Learning Policies Learn Shared Adversarial
Features Across MDPs" (https://doi.org/10.1609%2Faaai.v36i7.20684). Thirty-Sixth AAAI
Conference on Artificial Intelligence (AAAI-22). 36 (7): 7229–7238. arXiv:2112.09025 (http
s://arxiv.org/abs/2112.09025). doi:10.1609/aaai.v36i7.20684 (https://doi.org/10.1609%2Faa
ai.v36i7.20684). S2CID 245219157 (https://api.semanticscholar.org/CorpusID:245219157).
53. Berenji, H.R. (1994). "Fuzzy Q-learning: A new approach for fuzzy dynamic programming" (h
ttps://ieeexplore.ieee.org/document/343737). Proceedings of 1994 IEEE 3rd International
Fuzzy Systems Conference. Orlando, FL, USA: IEEE. pp. 486–491.
doi:10.1109/FUZZY.1994.343737 (https://doi.org/10.1109%2FFUZZY.1994.343737).
ISBN 0-7803-1896-X. S2CID 56694947 (https://api.semanticscholar.org/CorpusID:5669494
7).
54. Vincze, David (2017). "Fuzzy rule interpolation and reinforcement learning" (http://users.iit.u
ni-miskolc.hu/~vinczed/research/vinczed_sami2017_author_draft.pdf) (PDF). 2017 IEEE
15th International Symposium on Applied Machine Intelligence and Informatics (SAMI).
IEEE. pp. 173–178. doi:10.1109/SAMI.2017.7880298 (https://doi.org/10.1109%2FSAMI.201
7.7880298). ISBN 978-1-5090-5655-2. S2CID 17590120 (https://api.semanticscholar.org/Co
rpusID:17590120).
55. Ng, A. Y.; Russell, S. J. (2000). "Algorithms for Inverse Reinforcement Learning" (https://ai.st
anford.edu/~ang/papers/icml00-irl.pdf) (PDF). Proceeding ICML '00 Proceedings of the
Seventeenth International Conference on Machine Learning. pp. 663–670. ISBN 1-55860-
707-2.
56. Ziebart, Brian D.; Maas, Andrew; Bagnell, J. Andrew; Dey, Anind K. (2008-07-13).
"Maximum entropy inverse reinforcement learning" (https://dl.acm.org/doi/10.5555/1620270.
1620297). Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3.
AAAI'08. Chicago, Illinois: AAAI Press: 1433–1438. ISBN 978-1-57735-368-3.
S2CID 336219 (https://api.semanticscholar.org/CorpusID:336219).
57. Pitombeira-Neto, Anselmo R.; Santos, Helano P.; Coelho da Silva, Ticiana L.; de Macedo,
José Antonio F. (March 2024). "Trajectory modeling via random utility inverse reinforcement
learning" (https://doi.org/10.1016/j.ins.2024.120128). Information Sciences. 660: 120128.
arXiv:2105.12092 (https://arxiv.org/abs/2105.12092). doi:10.1016/j.ins.2024.120128 (https://
doi.org/10.1016%2Fj.ins.2024.120128). ISSN 0020-0255 (https://search.worldcat.org/issn/0
020-0255). S2CID 235187141 (https://api.semanticscholar.org/CorpusID:235187141).
58. Hayes C, Radulescu R, Bargiacchi E, et al. (2022). "A practical guide to multi-objective
reinforcement learning and planning" (https://doi.org/10.1007%2Fs10458-022-09552-y).
Autonomous Agents and Multi-Agent Systems. 36. arXiv:2103.09568 (https://arxiv.org/abs/2
103.09568). doi:10.1007/s10458-022-09552-y (https://doi.org/10.1007%2Fs10458-022-0955
2-y). S2CID 254235920 (https://api.semanticscholar.org/CorpusID:254235920).
59. Tzeng, Gwo-Hshiung; Huang, Jih-Jeng (2011). Multiple Attribute Decision Making: Methods
and Applications (1st ed.). CRC Press. ISBN 9781439861578.
60. García, Javier; Fernández, Fernando (1 January 2015). "A comprehensive survey on safe
reinforcement learning" (https://jmlr.org/papers/volume16/garcia15a/garcia15a.pdf) (PDF).
The Journal of Machine Learning Research. 16 (1): 1437–1480.
61. Dabney, Will; Ostrovski, Georg; Silver, David; Munos, Remi (2018-07-03). "Implicit Quantile
Networks for Distributional Reinforcement Learning" (https://proceedings.mlr.press/v80/dabn
ey18a.html). Proceedings of the 35th International Conference on Machine Learning. PMLR:
1096–1105. arXiv:1806.06923 (https://arxiv.org/abs/1806.06923).
62. Chow, Yinlam; Tamar, Aviv; Mannor, Shie; Pavone, Marco (2015). "Risk-Sensitive and
Robust Decision-Making: a CVaR Optimization Approach" (https://proceedings.neurips.cc/pa
per/2015/hash/64223ccf70bbb65a3a4aceac37e21016-Abstract.html). Advances in Neural
Information Processing Systems. 28. Curran Associates, Inc. arXiv:1506.02188 (https://arxi
v.org/abs/1506.02188).
63. "Train Hard, Fight Easy: Robust Meta Reinforcement Learning" (https://scholar.google.com/
citations?view_op=view_citation&hl=en&user=LnwyFkkAAAAJ&citation_for_view=LnwyFkk
AAAAJ:eQOLeE2rZwMC). scholar.google.com. Retrieved 2024-06-21.
64. Tamar, Aviv; Glassner, Yonatan; Mannor, Shie (2015-02-21). "Optimizing the CVaR via
Sampling" (https://ojs.aaai.org/index.php/AAAI/article/view/9561). Proceedings of the AAAI
Conference on Artificial Intelligence. 29 (1). arXiv:1404.3862 (https://arxiv.org/abs/1404.386
2). doi:10.1609/aaai.v29i1.9561 (https://doi.org/10.1609%2Faaai.v29i1.9561). ISSN 2374-
3468 (https://search.worldcat.org/issn/2374-3468).
65. Greenberg, Ido; Chow, Yinlam; Ghavamzadeh, Mohammad; Mannor, Shie (2022-12-06).
"Efficient Risk-Averse Reinforcement Learning" (https://proceedings.neurips.cc/paper_files/p
aper/2022/hash/d2511dfb731fa336739782ba825cd98c-Abstract-Conference.html).
Advances in Neural Information Processing Systems. 35: 32639–32652. arXiv:2205.05138
(https://arxiv.org/abs/2205.05138).
66. Bozinovski, S. (1982). "A self-learning system using secondary reinforcement". In Trappl,
Robert (ed.). Cybernetics and Systems Research: Proceedings of the Sixth European
Meeting on Cybernetics and Systems Research. North-Holland. pp. 397–402. ISBN 978-0-
444-86488-8
67. Bozinovski S. (1995) "Neuro genetic agents and structural theory of self-reinforcement
learning systems". CMPSCI Technical Report 95-107, University of Massachusetts at
Amherst [1] (https://web.cs.umass.edu/publication/docs/1995/UM-CS-1995-107.pdf)
68. Bozinovski, S. (2014) "Modeling mechanisms of cognition-emotion interaction in artificial
neural networks, since 1981." Procedia Computer Science p. 255-263
69. Engstrom, Logan; Ilyas, Andrew; Santurkar, Shibani; Tsipras, Dimitris; Janoos, Firdaus;
Rudolph, Larry; Madry, Aleksander (2019-09-25). "Implementation Matters in Deep RL: A
Case Study on PPO and TRPO" (https://openreview.net/forum?id=r1etN1rtPB). ICLR.
70. Colas, Cédric (2019-03-06). "A Hitchhiker's Guide to Statistical Comparisons of
Reinforcement Learning Algorithms" (https://openreview.net/forum?id=ryx0N3IaIV).
International Conference on Learning Representations. arXiv:1904.06979 (https://arxiv.org/a
bs/1904.06979).
71. Greenberg, Ido; Mannor, Shie (2021-07-01). "Detecting Rewards Deterioration in Episodic
Reinforcement Learning" (https://proceedings.mlr.press/v139/greenberg21a.html).
Proceedings of the 38th International Conference on Machine Learning. PMLR: 3842–3853.
arXiv:2010.11660 (https://arxiv.org/abs/2010.11660).
Further reading
Annaswamy, Anuradha M. (3 May 2023). "Adaptive Control and Intersections with
Reinforcement Learning" (https://doi.org/10.1146%2Fannurev-control-062922-090153).
Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93.
doi:10.1146/annurev-control-062922-090153 (https://doi.org/10.1146%2Fannurev-control-06
2922-090153). ISSN 2573-5144 (https://search.worldcat.org/issn/2573-5144).
S2CID 255702873 (https://api.semanticscholar.org/CorpusID:255702873).
Auer, Peter; Jaksch, Thomas; Ortner, Ronald (2010). "Near-optimal regret bounds for
reinforcement learning" (http://jmlr.csail.mit.edu/papers/v11/jaksch10a.html). Journal of
Machine Learning Research. 11: 1563–1600.
Bertsekas, Dimitri P. (2023) [2019]. Reinforcement Learning and Optimal Control
(http://www.mit.edu/~dimitrib/RLbook.html) (1st ed.). Athena Scientific.
ISBN 978-1-886-52939-7.
Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien (2010). Reinforcement
Learning and Dynamic Programming using Function Approximators (http://www.dcsc.tudelft.
nl/rlbook/). Taylor & Francis CRC Press. ISBN 978-1-4398-2108-4.
François-Lavet, Vincent; Henderson, Peter; Islam, Riashat; Bellemare, Marc G.; Pineau,
Joelle (2018). "An Introduction to Deep Reinforcement Learning". Foundations and Trends
in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560 (https://arxiv.org/abs/1811.1256
0). Bibcode:2018arXiv181112560F (https://ui.adsabs.harvard.edu/abs/2018arXiv181112560
F). doi:10.1561/2200000071 (https://doi.org/10.1561%2F2200000071). S2CID 54434537 (ht
tps://api.semanticscholar.org/CorpusID:54434537).
Li, Shengbo Eben (2023). Reinforcement Learning for Sequential Decision and Optimal
Control (https://link.springer.com/book/10.1007/978-981-19-7784-8) (1st ed.). Springer
Verlag, Singapore. doi:10.1007/978-981-19-7784-8 (https://doi.org/10.1007%2F978-981-19-
7784-8). ISBN 978-9-811-97783-1.
Powell, Warren (2011). Approximate dynamic programming: solving the curses of
dimensionality (https://web.archive.org/web/20160731230325/http://castlelab.princeton.edu/
adp.htm). Wiley-Interscience. Archived from the original (http://www.castlelab.princeton.edu/
adp.htm) on 2016-07-31. Retrieved 2010-09-08.
Sutton, Richard S. (1988). "Learning to predict by the method of temporal differences" (http
s://doi.org/10.1007%2FBF00115009). Machine Learning. 3: 9–44. doi:10.1007/BF00115009
(https://doi.org/10.1007%2FBF00115009).
Sutton, Richard S.; Barto, Andrew G. (2018) [1998]. Reinforcement Learning: An
Introduction (http://incompleteideas.net/sutton/book/the-book.html) (2nd ed.). MIT Press.
ISBN 978-0-262-03924-6.
Szita, Istvan; Szepesvari, Csaba (2010). "Model-based Reinforcement Learning with Nearly
Tight Exploration Complexity Bounds" (https://web.archive.org/web/20100714095438/http://
www.icml2010.org/papers/546.pdf) (PDF). ICML 2010. Omnipress. pp. 1031–1038. Archived
from the original (http://www.icml2010.org/papers/546.pdf) (PDF) on 2010-07-14.
External links
Dissecting Reinforcement Learning (https://mpatacchiola.github.io/blog/2016/12/09/dissectin
g-reinforcement-learning.html) A series of blog posts on reinforcement learning with Python code
A (Long) Peek into Reinforcement Learning (https://lilianweng.github.io/posts/2018-02-19-rl-
overview/)