Unconstrained Online Learning with Unbounded Losses
Abstract
Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees $R_{T}(u)\le \tilde{O}(G\|u\|\sqrt{T}+L\|u\|^{2}\sqrt{T})$ regret on any problem where the subgradients satisfy $\|g_{t}\|\le G+L\|w_{t}\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel $L^{*}$ bound when the losses are smooth.
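To make the subgradient growth condition concrete, here is a simple worked instance (an illustration of the assumption, not an example taken from the paper): the squared-distance loss $\ell_{t}(w)=\tfrac{1}{2}\|w-x_{t}\|^{2}$ with data satisfying $\|x_{t}\|\le G$ is non-Lipschitz over an unbounded domain, yet its gradients satisfy the required condition with $L=1$:
$$
\nabla \ell_{t}(w) = w - x_{t},
\qquad
\|\nabla \ell_{t}(w)\| \le \|x_{t}\| + \|w\| \le G + 1\cdot\|w\|,
$$
so for such losses the stated guarantee specializes to $R_{T}(u)\le \tilde{O}(G\|u\|\sqrt{T}+\|u\|^{2}\sqrt{T})$.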