arXiv:2306.04923

Unconstrained Online Learning with Unbounded Losses

Published on Jun 8, 2023
Authors: Andrew Jacobsen, Ashok Cutkosky

Abstract

Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees $R_{T}(u)\le \tilde{O}(G\|u\|\sqrt{T}+L\|u\|^{2}\sqrt{T})$ regret on any problem where the subgradients satisfy $\|g_{t}\|\le G+L\|w_{t}\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel $L^{*}$ bound when the losses are smooth.
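
To make the setting concrete, below is a minimal sketch (in Python with NumPy) of the protocol the abstract describes: at each round the learner plays $w_t$, receives a subgradient $g_t$ with $\|g_t\|\le G+L\|w_t\|$, and is judged by the linearized regret $R_T(u)=\sum_t \langle g_t, w_t-u\rangle$ against a comparator $u$. The update rule shown is a generic placeholder (gradient descent with a $1/\sqrt{t}$ step), not the algorithm from the paper, and all constants and names are illustrative.

```python
import numpy as np

# Illustrative sketch of the setting from the abstract: online linear
# optimization over an unbounded domain, where the adversary's subgradients
# obey ||g_t|| <= G + L * ||w_t||.  The update below is a placeholder, NOT
# the paper's algorithm; it only demonstrates the protocol and how regret
# is measured.

def regret(ws, gs, u):
    """Linearized regret R_T(u) = sum_t <g_t, w_t - u>."""
    return sum(g @ (w - u) for w, g in zip(ws, gs))

rng = np.random.default_rng(0)
d, T, G, L = 3, 1000, 1.0, 0.1

w = np.zeros(d)
ws, gs = [], []
for t in range(1, T + 1):
    ws.append(w.copy())
    # The adversary may return any subgradient satisfying the growth
    # condition; here we draw a random direction and scale it to the
    # largest allowed norm, G + L * ||w_t||.
    g = rng.normal(size=d)
    g *= (G + L * np.linalg.norm(w)) / max(np.linalg.norm(g), 1e-12)
    gs.append(g)
    w = w - g / np.sqrt(t)  # placeholder update rule

u = np.ones(d)  # an arbitrary comparator point
print(f"R_T(u) = {regret(ws, gs, u):.3f}")
```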
