arxiv:2310.02671

Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods

Published on Oct 4, 2023

Abstract

Markov decision processes (MDPs) are a formal framework for modelling and solving sequential decision-making problems. Over finite time horizons, such problems arise, for instance, in optimal stopping, in specific supply chain problems, and in the training of large language models. In contrast to infinite-horizon MDPs, optimal policies are not stationary: a policy must be learned for every single epoch. In practice, all parameters are often trained simultaneously, ignoring the inherent structure suggested by dynamic programming. This paper introduces a combination of dynamic programming and policy gradient, called dynamic policy gradient, in which the parameters are trained backwards in time. For the tabular softmax parametrisation we carry out the convergence analysis of simultaneous and dynamic policy gradient towards global optima, both in the exact and sampled gradient settings, without regularisation. It turns out that dynamic policy gradient training much better exploits the structure of finite-time problems, which is reflected in improved convergence bounds.
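
The sketch below illustrates the backwards-in-time training idea described in the abstract for a toy finite-horizon tabular MDP with softmax parametrisation, in the exact-gradient setting. It is a minimal illustration, not the authors' implementation: the random MDP, the uniform per-epoch state distribution `mu`, the step size, and the iteration count are all assumptions made for the example.

```python
# Minimal sketch of dynamic policy gradient: train the softmax parameters of
# epoch H-1 first, then H-2, ..., each time against the value of the already
# trained later epochs (exact gradients, no regularisation).
import numpy as np

H, S, A = 3, 4, 2                               # horizon, states, actions (assumed toy sizes)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # P[h, s, a, s'] transition kernels (random toy MDP)
r = rng.uniform(size=(H, S, A))                 # r[h, s, a] rewards (random toy MDP)
mu = np.full(S, 1.0 / S)                        # per-epoch state distribution (assumption: uniform)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_policy_gradient(eta=1.0, iters=500):
    """Train theta[h] backwards in time, one epoch at a time."""
    theta = np.zeros((H, S, A))
    V_next = np.zeros(S)                        # value of the already trained later epochs
    for h in reversed(range(H)):
        # With the later policies fixed, the epoch-h Q-values are fixed as well.
        Q = r[h] + P[h] @ V_next                # shape (S, A)
        for _ in range(iters):
            pi = softmax(theta[h])              # tabular softmax policy pi_h(a|s)
            V = (pi * Q).sum(axis=1)            # V_h(s) under the current pi_h
            # Exact softmax policy gradient of sum_s mu(s) V_h(s) w.r.t. theta[h]
            grad = mu[:, None] * pi * (Q - V[:, None])
            theta[h] += eta * grad              # gradient ascent step
        pi = softmax(theta[h])
        V_next = (pi * Q).sum(axis=1)           # value handed to the earlier epoch h-1
    return theta

theta = dynamic_policy_gradient()
```

Simultaneous training, by contrast, would update all of `theta[0], ..., theta[H-1]` in every gradient step; the backwards sweep above is what exploits the dynamic programming structure of the finite-horizon problem.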
