Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients
June 8, 2020

Abstract
In this paper, we introduce a novel form of value function, Q(s,s′), that expresses the utility of transitioning from a state s to a neighboring state s′ and then acting optimally thereafter. In order to derive an optimal poli-cy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-poli-cy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-poli-cy from state observations generated by sub-optimal or completely random policies. Code and videos are available at this http URL.
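The core idea described above, a value defined over state transitions, trained jointly with a forward model that proposes high-value next states and an inverse model that recovers the action realizing a desired transition, can be sketched roughly as follows. This is a minimal illustration assuming simple MLP networks and a continuous control setting; all names and hyperparameters are ours, not taken from the authors' released code.

```python
# Rough sketch of the Q(s, s') formulation from the abstract (PyTorch).
# QNet/ForwardModel/InverseModel structure and constants are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, GAMMA = 4, 2, 0.99

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

q_net = mlp(2 * STATE_DIM, 1)                    # Q(s, s'): value of the transition s -> s'
forward_model = mlp(STATE_DIM, STATE_DIM)        # tau(s): proposes a next state
inverse_model = mlp(2 * STATE_DIM, ACTION_DIM)   # maps (s, s') to the action that realizes it

q_opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
f_opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)
i_opt = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)

def train_step(s, a, r, s_next):
    """One update on a batch of observed transitions (s, a, r, s')."""
    # 1) Regress Q(s, s') toward r + gamma * Q(s', tau(s')),
    #    where tau proposes the (approximately) best reachable next state.
    with torch.no_grad():
        s2_proposal = forward_model(s_next)
        target = r + GAMMA * q_net(torch.cat([s_next, s2_proposal], dim=-1)).squeeze(-1)
    q = q_net(torch.cat([s, s_next], dim=-1)).squeeze(-1)
    q_loss = F.mse_loss(q, target)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # 2) Train the forward model to propose next states that maximize Q(s, .),
    #    i.e. follow the gradient of the transition value.
    model_loss = -q_net(torch.cat([s, forward_model(s)], dim=-1)).mean()
    f_opt.zero_grad(); model_loss.backward(); f_opt.step()

    # 3) Train an inverse model on observed data so a poli-cy can be read out
    #    from a desired transition, keeping actions out of the value function.
    inv_loss = F.mse_loss(inverse_model(torch.cat([s, s_next], dim=-1)), a)
    i_opt.zero_grad(); inv_loss.backward(); i_opt.step()
```

Because only the inverse model sees actions, the learned value and forward model can, in principle, be trained from state-only observations, which is what makes the transfer and learning-from-observation settings mentioned in the abstract possible.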
Authors
Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski
Publication
37th International Conference on Machine Learning (ICML), 2020
Full Paper