forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_pdf_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | note_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|
PfRAkMOFDc | Using a Learned Policy Basis to Optimally Solve Reward Machines | [
"Guillermo Infante",
"David Kuric",
"Vicenç Gómez",
"Anders Jonsson",
"Herke van Hoof"
] | Conventional reinforcement learning (RL) methods can successfully solve a wide range of sequential decision problems. However, learning policies that can generalize predictably across multiple tasks in a setting with non-Markovian reward specifications is a challenging problem. We propose to use successor features (SF) to learn a policy basis so that each (sub)policy in it solves a well-defined subproblem. In a task described by a reward machine (RM) that involves the same set of subproblems, the combination of these (sub)policies can then be used to generate an optimal solution without additional learning. In contrast to other methods that combine (sub)policies via planning, our method asymptotically attains global optimality, even in stochastic environments. | [
"reward machines",
"hierarchical reinforcement learning",
"composable rl",
"non-Markovian rl"
] | https://openreview.net/pdf?id=PfRAkMOFDc | 4XgbVk1lGq | decision | 1,722,287,919,568 | PfRAkMOFDc | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
PZCG5UI2zy | The challenge of continuous MDPs: is no-regret learning feasible? | [
"Davide Maran",
"Alberto Maria Metelli",
"Matteo Papini",
"Marcello Restelli"
] | Achieving the no-regret property for Reinforcement Learning (RL) problems in continuous state and action-space environments is one of the major open problems in the field. Existing solutions either work under very specific assumptions or achieve bounds that are vacuous in some regimes. Furthermore, many structural assumptions are known to suffer from a provably unavoidable exponential dependence on the time horizon $H$ in the regret, which makes any possible solution infeasible in practice. In this paper, we identify \textit{local linearity} as the feature that makes Markov Decision Processes (MDPs) both \textit{learnable} (sublinear regret) and \textit{feasible} (regret that is polynomial in $H$). We define a novel MDP representation class, namely \textit{Locally Linearizable MDPs}, generalizing other representation classes like Linear MDPs and MDPs with low inherent Bellman error. Then, i) we introduce \textsc{Cinderella}, a no-regret algorithm for this general representation class, and ii) we show that all known learnable and feasible MDP families are representable in this class. We first show that all known feasible MDPs belong to a family that we call \textit{Mildly Smooth MDPs}. Then, we show how any mildly smooth MDP can be represented as a Locally Linearizable MDP by an appropriate choice of representation. This way, \textsc{Cinderella} is shown to achieve state-of-the-art regret bounds for all previously known (and some new) continuous MDPs for which RL is learnable and feasible. | [
"continuous mdps",
"smoothness",
"representation",
"regret"
] | https://openreview.net/pdf?id=PZCG5UI2zy | hQPXi9ofBp | decision | 1,722,287,916,389 | PZCG5UI2zy | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
PPIiNQWX63 | MetaCURL: Non-stationary Concave Utility Reinforcement Learning | [
"Bianca Marin Moreno",
"Margaux Brégère",
"Pierre Gaillard",
"Nadia Oudjane"
] | We explore online learning in episodic loop-free Markov decision processes in non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria in state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates traditional Bellman equations. Despite recent solutions to classical CURL, none address non-stationary MDPs. This paper introduces MetaCURL, the first CURL algorithm for non-stationary MDPs. It employs a meta-algorithm running multiple black-box algorithm instances over different intervals, aggregating outputs via a sleeping expert framework. The key hurdle is partial information due to MDP uncertainty. Under partial information on the probability transitions (uncertainty and non-stationarity coming only from external noise, independent of agent state-action pairs), we achieve optimal dynamic regret without prior knowledge of MDP changes. Unlike existing approaches for RL, MetaCURL handles full adversarial losses, not just stochastic ones. We believe our approach for managing non-stationarity with experts can be of interest to the RL community. | [
"Concave utility reinforcement learning",
"non-stationary MDPs",
"learning with expert advice",
"online learning"
] | https://openreview.net/pdf?id=PPIiNQWX63 | SFmQFPCSjq | decision | 1,722,287,916,183 | PPIiNQWX63 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
P0kRLiTx04 | Following Ancestral Footsteps: Co-Designing Morphology and Behaviour with Self-Imitation Learning | [
"Sergio Hernández-Gutiérrez",
"Ville Kyrki",
"Kevin Sebastian Luck"
] | In this paper we consider the problem of co-adapting the body and behaviour of agents, a long-standing research problem in the community of evolutionary robotics. Previous work has largely focused on the development of methods exploiting massive parallelization of agent evaluations with large population sizes, a paradigm which is not applicable to the real world. More recent data-efficient approaches utilizing reinforcement learning can suffer from distributional shifts in transition dynamics as well as in state and action spaces when experiencing new body morphologies. In this work, we propose a new co-adaptation method combining reinforcement learning and State-Aligned Self-Imitation Learning. We show that the integration of a self-imitation signal improves the data-efficiency of the co-adaptation process as well as the behavioural recovery when adapting morphological parameters. | [
"Inverse Reinforcement Learning",
"Co-Design",
"Evolutionary Robotics",
"Imitation Learning"
] | https://openreview.net/pdf?id=P0kRLiTx04 | xiF8TvCPS1 | decision | 1,722,287,920,385 | P0kRLiTx04 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
OcVU47V7MA | Exploring Pessimism and Optimism Dynamics in Deep Reinforcement Learning | [
"Bahareh Tasdighi",
"Nicklas Werge",
"Yi-Shan Wu",
"Melih Kandemir"
] | Off-policy actor-critic algorithms have shown promise in deep reinforcement learning for continuous control tasks. Their success largely stems from leveraging pessimistic state-action value function updates, which effectively address function approximation errors and improve performance. However, such pessimism can lead to under-exploration, constraining the agent's ability to explore/refine its policies. Conversely, optimism can counteract under-exploration, but it also carries the risk of excessive risk-taking and poor convergence if not properly balanced. Based on these insights, we introduce Utility Soft Actor-Critic (USAC), a novel framework within the actor-critic paradigm that enables independent control over the degree of pessimism/optimism for both the actor and the critic via interpretable parameters. USAC adapts its exploration strategy based on the uncertainty of critics through a utility function that allows us to balance between pessimism and optimism separately. By going beyond binary choices of optimism and pessimism, USAC represents a significant step towards achieving balance within off-policy actor-critic algorithms. Our experiments across various continuous control problems show that the degree of pessimism or optimism depends on the nature of the task. Furthermore, we demonstrate that USAC can outperform state-of-the-art algorithms for appropriately configured pessimism/optimism parameters. | [
"machine learning",
"reinforcement learning",
"deep reinforcement learning",
"actor-critic",
"exploration–exploitation",
"pessimism vs. optimism"
] | https://openreview.net/pdf?id=OcVU47V7MA | 5dSsqAJLiA | decision | 1,722,287,916,842 | OcVU47V7MA | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
MghRvi6pj4 | A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays | [
"Saeed Masoudian",
"Julian Zimmert",
"Yevgeny Seldin"
] | We propose a new best-of-both-worlds algorithm for bandits with variably delayed feedback. In contrast to prior work, which required prior knowledge of the maximal delay $d_{\max}$ and had a linear dependence of the regret on it, our algorithm can tolerate arbitrary excessive delays up to order $T$ (where $T$ is the time horizon). The algorithm is based on three technical innovations, which may all be of independent interest: (1) We introduce the first implicit exploration scheme that works in the best-of-both-worlds setting. (2) We introduce the first control of distribution drift that does not rely on boundedness of delays. The control is based on the implicit exploration scheme and adaptive skipping of observations with excessive delays. (3) We introduce a procedure relating standard regret with drifted regret that does not rely on boundedness of delays. At the conceptual level, we demonstrate that the complexity of best-of-both-worlds bandits with delayed feedback is characterized by the amount of information missing at the time of decision making (measured by the number of outstanding observations) rather than the time that the information is missing (measured by the delays). | [
"Multiarmed bandits",
"Delayed feedback",
"Best-of-both-worlds"
] | https://openreview.net/pdf?id=MghRvi6pj4 | mn6FXEXaNW | decision | 1,722,287,915,810 | MghRvi6pj4 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
MTzyiFmMWq | Online Planning in POMDPs with State-Requests | [
"Raphaël Avalos",
"Eugenio Bargiacchi",
"Ann Nowe",
"Diederik Roijers",
"Frans A Oliehoek"
] | In key real-world problems, full state information is sometimes available but at a high cost, like activating precise yet energy-intensive sensors or consulting humans, thereby compelling the agent to operate under partial observability. For this scenario, we propose AEMS-SR (Anytime Error Minimization Search with State Requests), a principled online planning algorithm tailored for POMDPs with state requests. By representing the search space as a graph instead of a tree, AEMS-SR avoids the exponential growth of the search space originating from state requests. Theoretical analysis demonstrates AEMS-SR's $\varepsilon$-optimality, ensuring solution quality, while empirical evaluations illustrate its effectiveness compared with AEMS and POMCP, two SOTA online planning algorithms. AEMS-SR enables efficient planning in domains characterized by partial observability and costly state requests offering practical benefits across various applications. | [
"Planning",
"POMDP",
"State-Request",
"AEMS"
] | https://openreview.net/pdf?id=MTzyiFmMWq | B8bCWMtTKK | decision | 1,722,287,917,011 | MTzyiFmMWq | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
M9bvAsc9m6 | Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts | [
"Ahmed Hendawy",
"Jan Peters",
"Carlo D'Eramo"
] | Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems. To this end, sharing representations plays a fundamental role in capturing both unique and common characteristics of the tasks. Tasks may exhibit similarities in terms of skills, objects, or physical properties, and leveraging their representations eases the achievement of a universal policy. Nevertheless, the pursuit of learning a shared set of diverse representations is still an open challenge. In this paper, we introduce a novel approach for representation learning in MTRL that encapsulates common structures among the tasks using orthogonal representations to promote diversity. Our method, named Mixture Of Orthogonal Experts (MOORE), leverages a Gram-Schmidt process to shape a shared subspace of representations generated by a mixture of experts. When task-specific information is provided, MOORE generates relevant representations from this shared subspace. We assess the effectiveness of our approach on two MTRL benchmarks, namely MiniGrid and MetaWorld, showing that MOORE surpasses related baselines and establishes a new state-of-the-art result on MetaWorld. | [
"Reinforcement Learning",
"Multi-Task Learning",
"Mixture of Experts"
] | https://openreview.net/pdf?id=M9bvAsc9m6 | AHluEahp2i | decision | 1,722,287,916,160 | M9bvAsc9m6 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
M30X4I6cBx | Time-Efficient Reinforcement Learning with Stochastic Stateful Policies | [
"Firas Al-Hafez",
"Guoping Zhao",
"Jan Peters",
"Davide Tateo"
] | Stateful policies play an important role in reinforcement learning, such as handling partially observable environments, enhancing robustness, or imposing an inductive bias directly into the policy structure. The conventional method for training stateful policies is Backpropagation Through Time (BPTT), which comes with significant drawbacks, such as slow training due to sequential gradient propagation and the occurrence of vanishing or exploding gradients. The gradient is often truncated to address these issues, resulting in a biased policy update. We present a novel approach for training stateful policies by decomposing the latter into a stochastic internal state kernel and a stateless policy, jointly optimized by following the stateful policy gradient. We introduce different versions of the stateful policy gradient theorem, enabling us to easily instantiate stateful variants of popular reinforcement learning and imitation learning algorithms. Furthermore, we provide a theoretical analysis of our new gradient estimator and compare it with BPTT. We evaluate our approach on complex continuous control tasks, e.g. humanoid locomotion, and demonstrate that our gradient estimator scales effectively with task complexity while offering a faster and simpler alternative to BPTT. | [
"Reinforcement Learning",
"Recurrent Networks",
"Stateful Policies",
"Imitation Learning",
"Stochastic Stateful Policy Gradient"
] | https://openreview.net/pdf?id=M30X4I6cBx | SYU2rpBmgU | decision | 1,722,287,916,608 | M30X4I6cBx | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
LmJH7LDfRO | Efficient Source Tasks Selection for Zero-shot Transfer in Contextual Reinforcement Learning | [
"Jung-Hoon Cho",
"Vindula Jayawardana",
"Sirui Li",
"Cathy Wu"
] | Deep reinforcement learning is a powerful approach to complex decision-making. However, one issue that limits its practical application is its brittleness, sometimes failing to train in the presence of small changes in the environment. This work is motivated by the empirical observation that directly applying an already trained model to a related task often works remarkably well, also called zero-shot transfer. We take this practical trick one step further to consider how to systematically select good tasks to train, maximizing overall performance across a range of tasks. Given the high cost of training, it is critical to choose a small set of training tasks. The key idea behind our approach is to explicitly model the performance loss (generalization gap) incurred by transferring a trained model. We hence introduce Model-Based Transfer Learning (MBTL) for solving contextual RL problems. In this work, we model the performance loss as a simple linear function of task context similarity. Furthermore, we leverage Bayesian optimization techniques to efficiently model and estimate the unknown training performance of the task space. We theoretically show that the method exhibits sublinear regret in the number of training tasks and discuss conditions to further tighten regret bounds. We experimentally validate our methods using urban traffic and standard control benchmarks. Despite the conceptual simplicity, the experimental results suggest that MBTL can achieve greater performance than strong baselines, including exhaustive training on all tasks, multi-task training, and random selection of training tasks. This work lays the foundations for investigating explicit modeling of generalization, thereby enabling principled yet effective methods for contextual RL. | [
"Contextual Reinforcement Learning",
"Zero-Shot Transfer",
"Generalization",
"Bayesian Optimization",
"Source Task Selection"
] | https://openreview.net/pdf?id=LmJH7LDfRO | OUnuuk0jU3 | decision | 1,722,287,921,317 | LmJH7LDfRO | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
LkYaQzGgfL | Combining Automated Optimisation of Hyperparameters and Reward Shape | [
"Julian Dierkes",
"Emma Cramer",
"Sebastian Trimpe",
"Holger Hoos"
] | There has been significant progress in deep reinforcement learning (RL) in recent years. Nevertheless, finding suitable hyperparameter configurations and reward functions remains challenging even for experts, and performance heavily relies on these design choices. Also, most RL research is conducted on known benchmarks where knowledge about these choices already exists. However, novel practical applications often pose complex tasks for which no prior knowledge about good hyperparameters and reward functions is available, thus necessitating their derivation from scratch. Prior work has examined automatically tuning either hyperparameters or reward functions individually. We demonstrate empirically that an RL algorithm's hyperparameter configurations and reward function are often mutually dependent, meaning neither can be fully optimised without appropriate values for the other. We then propose a methodology for the combined optimisation of hyperparameters and the reward function. Furthermore, we include a variance penalty as an optimisation objective to improve the stability of learned policies. We conducted extensive experiments using Proximal Policy Optimisation and Soft Actor-Critic on four environments. Our results show that combined optimisation significantly improves over baseline performance in half of the environments and achieves competitive performance in the others, with only a minor increase in computational costs. This suggests that combined optimisation should be best practice. | [
"AutoRL",
"Hyperparameter optimisation",
"Reward shape optimisation"
] | https://openreview.net/pdf?id=LkYaQzGgfL | pXwOusYP0F | decision | 1,722,287,921,103 | LkYaQzGgfL | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
L0mqoVBjGj | Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes | [
"Asaf Cassel",
"Aviv Rosenberg"
] | Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure exploration warm-up phase that is hard to implement in practice. This paper eliminates this undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the other parameters of the problem (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback. | [
"policy optimization",
"reinforcement learning theory",
"regret",
"Markov Decision Process",
"linear MDP"
] | https://openreview.net/pdf?id=L0mqoVBjGj | iwH7hDKpt3 | decision | 1,722,287,916,348 | L0mqoVBjGj | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
KdnHxZvcIw | Bayesian Meta-Reinforcement Learning with Laplace Variational Recurrent Networks | [
"Joery A. de Vries",
"Jinke He",
"Mathijs de Weerdt",
"Matthijs T. J. Spaan"
] | Meta-reinforcement learning trains a single reinforcement learning algorithm on a distribution of tasks to quickly generalize to new tasks outside of the training set at test time. From a Bayesian perspective, one can interpret this as performing amortized variational inference on the posterior distribution over training tasks. Among the various meta-reinforcement learning approaches, a common method is to represent this distribution with a point-estimate using a recurrent neural network. We show how one can augment this point estimate to give full distributions through the Laplace approximation, either at the start of, during, or after learning, without modifying the base model architecture. With our approximation, we are able to estimate distributional statistics (e.g., the entropy) of non-Bayesian agents and observe that point-estimate based methods produce overconfident estimators while not satisfying consistency. Furthermore, when comparing our approach to full-distribution based learning of the task posterior, we found our method to perform on par with variational inference baselines despite being simpler to implement. | [
"Variational Inference",
"Bayesian Reinforcement Learning",
"Meta-Reinforcement Learning",
"Uncertainty Estimation"
] | https://openreview.net/pdf?id=KdnHxZvcIw | cwbUSx9Soe | decision | 1,722,287,921,205 | KdnHxZvcIw | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
KDiCsArAKs | Controller Synthesis from Deep Reinforcement Learning Policies | [
"Florent Delgrange",
"Guy Avni",
"Anna Lukina",
"Christian Schilling",
"Ann Nowe",
"Guillermo Perez"
] | We propose a novel framework for controller design in environments with a two-level structure: a high-level graph in which each vertex is populated by a Markov decision process, called a ``room'', with several low-level objectives. We proceed as follows. First, we apply deep reinforcement learning (DRL) to obtain low-level policies for each room and objective. Second, we apply reactive synthesis to obtain a planner that selects which low-level policy to apply in each room. Reactive synthesis refers to constructing a planner for a given model of the environment that satisfies a given objective (typically specified as a temporal logic formula) by design. The main advantage of the framework is formal guarantees. In addition, the framework enables a "separation of concerns": low-level tasks are addressed using DRL, which enables scaling to large rooms of unknown dynamics, reward engineering is only done locally, and policies can be reused, whereas users can specify high-level tasks intuitively and naturally. The central challenge in synthesis is the need for a model of the rooms. We address this challenge by developing a DRL procedure to train concise "latent" policies together with latent abstract rooms, both paired with PAC guarantees on performance and abstraction quality. Unlike previous approaches, this circumvents a model distillation step. We demonstrate feasibility in a case study involving agent navigation in an environment with moving obstacles. | [
"deep reinforcement learning",
"reactive synthesis",
"formal methods",
"reach-avoid objectives"
] | https://openreview.net/pdf?id=KDiCsArAKs | dJKYz5jDYk | decision | 1,722,287,919,200 | KDiCsArAKs | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
JweOjS1E3t | Latent Communication for Zero-shot Stitching in Reinforcement Learning | [
"Antonio Pio Ricciardi",
"Valentino Maiorca",
"Luca Moschella",
"Riccardo Marin",
"Emanuele Rodolà"
] | Visual Reinforcement Learning is a popular and powerful framework that takes full advantage of the Deep Learning breakthrough. It is known that variations in the input (e.g., different colors of the panorama due to the season of the year) or task (e.g., changing the target speed of a car) domains could disrupt agents' performance, therefore requiring new training. Recent advancements in Latent Communication Theory show that it is possible to combine components of different neural networks to create new models in a zero-shot fashion. In this paper, we leverage such advancements to show that components of agents trained on different visual and task variations can be combined by aligning the latent representations produced by their encoders, to obtain new agents that can act well in visual-task combinations never seen together during training. Our findings open the door to more efficient training processes, significantly reducing time and computational costs. | [
"visual reinforcement learning",
"reinforcement learning",
"relative representation",
"zero-shot",
"stitching",
"latent communication"
] | https://openreview.net/pdf?id=JweOjS1E3t | 19qa8YhPdt | decision | 1,722,287,917,613 | JweOjS1E3t | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
JSWRnHC93W | APE: An Anti-poaching Multi-Agent Reinforcement Learning Benchmark | [
"Prasanna Maddila",
"Casellas Eric",
"Patrick Chabrier",
"Régis Sabbadin",
"Meritxell Vinyals"
] | Widespread poaching threatens many endangered species today, requiring robust strategies to coordinate ranger patrols and effectively deter poachers within protected areas. Recent research has modeled this problem as a strategic game between rangers and poachers, resulting in anti-poaching becoming a popular application domain within game theory and multi-agent research communities. Unfortunately, the lack of a standard open-source implementation of the anti-poaching game hinders the reproducibility and advancement of current research in the field. This paper aims to fill this gap by providing the first open-source standardised environment for the anti-poaching game. Our contributions are as follows: (1) we formalise anti-poaching as a Partially Observable Stochastic Game; (2) we provide the Anti-Poaching Environment (APE), an open-source Python implementation of a simulator for this game using the PettingZoo API, which is compatible with many existing multi-agent reinforcement learning (MARL) libraries; and (3) we illustrate how to apply deep reinforcement-learning algorithms from the RLlib library, in order to compute cooperative and cooperative-competitive equilibria of APE instances. | [
"anti-poaching",
"multi-agent reinforcement learning",
"PettingZoo environment",
"partially observable stochastic game"
] | https://openreview.net/pdf?id=JSWRnHC93W | 6Bi62L9BRg | decision | 1,722,287,916,869 | JSWRnHC93W | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
J6iiVwFHLc | Generalisation to unseen topologies: Towards control of biological neural network activity | [
"Laurens Engwegen",
"Daan Brinks",
"Wendelin Boehmer"
] | Novel imaging and neurostimulation techniques open doors for advancements in closed-loop control of activity in biological neural networks. This would allow for applications in the investigation of activity propagation, and for diagnosis and treatment of pathological behaviour. Due to the partially observable characteristics of activity propagation through networks in which edges cannot be observed, and the dynamic nature of neuronal systems, there is a need for adaptive, generalisable control. In this paper, we introduce an environment that procedurally generates neuronal networks with different topologies to investigate this generalisation problem. Additionally, an existing transformer-based architecture is adjusted to evaluate the generalisation performance of a deep RL agent in the presented partially observable environment. The agent demonstrates the capability to generalise control from a limited number of training networks to unseen test networks. | [
"Reinforcement learning",
"multitask learning",
"generalisation",
"neuroscience",
"neuronal simulation",
"closed-loop neuronal control"
] | https://openreview.net/pdf?id=J6iiVwFHLc | RNRfOAN2YD | decision | 1,722,287,918,613 | J6iiVwFHLc | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
IxqUJF2XRN | Offline Reinforcement Learning with Pessimistic Value Priors | [
"Filippo Valdettaro",
"Aldo A. Faisal"
] | We mitigate the effect of distribution shift in offline reinforcement learning by regularisation through value function inference with a pessimistic prior as a mechanism to induce critic conservatism and avoid unsupported policies. By introducing a pessimistic prior on the value of the learned policy and carrying out inference in value function space, the resulting posterior will only have high action-values in regions where these are supported by the dataset. Regularisation through inference has the potential to be not as aggressively conservative as other forms of regularisation, such as those that try to be robust to worst-case outcomes given the data, while still avoiding out-of-distribution actions. We develop this approach for continuous control in deterministic environments and propose a way to make it scalable and compatible with deep learning architectures. As a byproduct of this inference scheme we also obtain consistent Bayesian uncertainty for model-free off-policy evaluation from a non-episodic dataset of individual transitions. We develop this framework for control in continuous-action environments and present results on a toy environment with exact inference and preliminary results on a scalable, deep version of our framework on a D4RL benchmark robotics task. Our methods show potential for improved performance on such a task, and suggest that future experimental work on improving training stability of our methods could result in effective offline reinforcement learning algorithms coming from simple modifications of online algorithms. | [
"offline reinforcement learning",
"Bayesian inference",
"Gaussian processes"
] | https://openreview.net/pdf?id=IxqUJF2XRN | mYRMs1d4zP | decision | 1,722,287,920,801 | IxqUJF2XRN | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
IPVSK0j8AO | World Models Increase Autonomy in Reinforcement Learning | [
"Zhao Yang",
"Thomas M. Moerland",
"Mike Preuss",
"Aske Plaat",
"Edward S. Hu"
] | Reinforcement learning (RL) is an appealing paradigm for training intelligent agents, enabling policy acquisition from the agent's own autonomously acquired experience. However, the training process of RL is far from automatic, requiring extensive human effort to reset the agent and environments. To tackle the challenging reset-free setting, we first demonstrate the superiority of model-based (MB) RL methods in such a setting, showing that a straightforward application of MBRL can outperform all the prior state-of-the-art methods while requiring less supervision. We then identify limitations inherent to this direct extension and propose a solution called model-based reset-free (MoReFree) agent, which further enhances the performance. MoReFree adapts two key mechanisms, exploration and policy learning, to handle reset-free tasks by prioritizing task-relevant states. It exhibits superior data-efficiency across various reset-free tasks without access to environmental reward or demonstrations while significantly outperforming privileged baselines that require supervision. Our findings suggest model-based methods hold significant promise for reducing human effort in RL. Website: https://sites.google.com/view/morefree | [
"World Models",
"Autonomy",
"Reset-free"
] | https://openreview.net/pdf?id=IPVSK0j8AO | NISQ2lJYNW | decision | 1,722,287,917,389 | IPVSK0j8AO | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
I3CYzJjUYV | Robust Best-of-Both-Worlds Gap Estimators Based on Importance-Weighted Sampling | [
"Sarah Clusiau",
"Saeed Masoudian",
"Yevgeny Seldin"
] | We present a novel strategy for robust estimation of the gaps in multiarmed bandits that is based on importance-weighted sampling. The strategy is applicable in the best-of-both-worlds setting, namely, it can be used in both the stochastic and adversarial regimes with no need for prior knowledge of the regime. It is based on a pair of estimators, one based on standard importance weighted sampling to upper bound the losses, and another based on importance weighted sampling with implicit exploration to lower bound the losses. We combine the strategy with the EXP3++ algorithm to achieve best-of-both-worlds regret guarantees in the stochastic and adversarial regimes, and in the stochastically constrained adversarial regime. We conjecture that the strategy can be applied more broadly to robust gap estimation in reinforcement learning, which will be studied in future work. | [
"gap estimation",
"multiarmed bandits",
"importance weighted samples",
"best-of-both-worlds regret guarantees."
] | https://openreview.net/pdf?id=I3CYzJjUYV | jGoGSGs71M | decision | 1,722,287,920,565 | I3CYzJjUYV | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
HmEZsHh589 | Approximate information maximization for bandit games | [
"Alex Barbier Chebbah",
"Christian L Vestergaard",
"Masson",
"Etienne Boursier"
] | Entropy maximization and free energy minimization are general physics principles for modeling dynamic systems. Notable examples include modeling decision-making within the brain using the free-energy principle, optimizing the accuracy-complexity trade-off when accessing hidden variables with the information bottleneck principle (Tishby et al. 2000), and navigation in random environments using information maximization (Vergassola et al. 2007). Building on this principle, we propose a new class of bandit algorithms that maximize an approximation to the information of a key variable within the system. To this end, we develop an approximated, analytical physics-based representation of the entropy to forecast the information gain of each action and greedily choose the one with the largest information gain. This method yields strong performances in classical bandit settings. Motivated by its empirical success, we prove its asymptotic optimality for the multi-armed bandit problem with Gaussian rewards. Since it encompasses the system's properties in a single, global functional, this approach can be efficiently adapted to more complex bandit settings. This calls for further investigation of information maximization approaches for multi-armed bandit problems. | [
"bandits",
"information maximization",
"Bayesian inference",
"physics-based approach"
] | https://openreview.net/pdf?id=HmEZsHh589 | FZYmIYzJtf | decision | 1,722,287,920,030 | HmEZsHh589 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
GxdPhNOYKf | Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm | [
"Sattar Vakili",
"Julia Olkhovskaya"
] | Reinforcement learning utilizing kernel ridge regression to predict the expected value function represents a powerful method with great representational capacity. This setting is a highly versatile framework amenable to analytical results. We consider kernel-based function approximation for RL in the infinite horizon average reward setting, also referred to as the undiscounted setting. We propose an \emph{optimistic} algorithm, similar to acquisition function based algorithms in the special case of bandits. We establish novel \emph{no-regret} performance guarantees for our algorithm, under kernel-based modelling assumptions. Additionally, we derive a novel confidence interval for the kernel-based prediction of the expected value function, applicable across various RL problems. | [
"Kernel function approximation",
"reinforcement learning"
] | https://openreview.net/pdf?id=GxdPhNOYKf | gIli6MUX01 | decision | 1,722,287,917,994 | GxdPhNOYKf | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
GdG5tkPqvc | Enhancing Exploration via Off-Reward Dynamic Reference Reinforcement Learning | [
"Yamen Habib",
"Dmytro Grytskyy",
"Rubén Moreno-Bote"
] | In reinforcement learning (RL), balancing exploration and exploitation is essential for maximizing a target reward function. Traditional methods often employ regularizers to prevent the policy from becoming deterministic too early, such as penalizing deviations from a static reference policy. This paper introduces a novel approach by {\em jointly} training an off-reward dynamic reference policy (ORDRP) with the target policy, using a distinct reward function to guide exploration. We employ Kullback–Leibler divergence between the target policy and the dynamic reference policy as a regularization mechanism. Crucially, we provide a formal proof of convergence for the ORDRP iteration method, establishing its theoretical soundness. Our approach is validated within an actor-critic framework, with the ORDRP trained either using the maximum occupancy principle or Laplacian intrinsic off-rewards. Experimental results in challenging environments demonstrate that incorporating a jointly trained ORDRP enhances exploration, resulting in superior performance and higher sample efficiency compared to state-of-the-art baselines. These findings highlight the benefits of learning the reference policy alongside the main policy, leading to improved learning outcomes. Project page: https://yamenhabib.com/ORDRP/ | [
"Off-Reward Dynamic Reference Policy (ORDRP)",
"Dynamic Reference",
"KL Regularization",
"graph Laplacian",
"Maximum Occupancy Principle",
"Actor Critic"
] | https://openreview.net/pdf?id=GdG5tkPqvc | 2kTY64Q8FQ | decision | 1,722,287,921,232 | GdG5tkPqvc | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
FDV3SpjiPA | State Abstraction Discovery from Progressive Disaggregation Methods | [
"Orso Forghieri",
"Hind Castel",
"Emmanuel Hyon",
"Erwan Le Pennec"
] | The high dimensionality of model-based Reinforcement Learning (RL) and Markov Decision Processes (MDPs) can be reduced using abstractions of the state and action spaces. Although hierarchical learning and state abstraction methods have been explored over the past decades, explicit methods to build useful abstractions of models are rarely provided. In this work, we study the relationship between Approximate Dynamic Programming (ADP) and State Abstraction. We provide an estimation of the approximation made through abstraction, which can be explicitly calculated. We also introduce a way to solve large MDPs through an abstraction refinement process that can be applied to both discounted and total reward criteria. This method allows finding explicit state abstractions while solving any MDP with controlled error. We then integrate this state space disaggregation process into classical Dynamic Programming algorithms, namely Approximate Value Iteration, Q-Value Iteration, and Policy Iteration. We show that this method can decrease the solving time of a wide range of models and can also describe the underlying dynamics of the MDP without making any assumptions about the structure of the problem. We also conduct an extensive numerical comparison and compare our approach to existing aggregation methods to support our claims. | [
"State Abstraction",
"Model-based",
"Reinforcement Learning",
"Dynamic Programming"
] | https://openreview.net/pdf?id=FDV3SpjiPA | oBmvcWcZXk | decision | 1,722,287,917,578 | FDV3SpjiPA | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
EruoSfUPxU | Divide and Conquer: Provably Unveiling the Pareto Front with Multi-Objective Reinforcement Learning | [
"Willem Röpke",
"Mathieu Reymond",
"Patrick Mannion",
"Diederik M Roijers",
"Ann Nowe",
"Roxana Rădulescu"
] | A notable challenge in multi-objective reinforcement learning is obtaining a Pareto front of policies to attain optimal performance under different preferences. We introduce Iterated Pareto Referent Optimisation (IPRO), which decomposes finding the Pareto front into a sequence of constrained single-objective problems. This enables us to guarantee convergence while providing an upper bound on the distance to undiscovered Pareto optimal solutions at each step. Empirical evaluations demonstrate that IPRO matches or outperforms methods that require additional assumptions. Furthermore, IPRO is a general-purpose multi-objective optimisation method, making it applicable to domains beyond reinforcement learning. | [
"Reinforcement learning",
"Multi-objective",
"Pareto front"
] | https://openreview.net/pdf?id=EruoSfUPxU | jjkEZTeWWY | decision | 1,722,287,916,738 | EruoSfUPxU | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
EGLZgZ8Gme | Almost Sure Convergence of Stochastic Gradient Methods under Gradient Domination | [
"Simon Weissmann",
"Sara Klein",
"Waïss Azizian",
"Leif Döring"
] | Stochastic gradient methods are among the most important algorithms in training machine learning problems. While classical assumptions such as strong convexity allow a simple analysis, they are rarely satisfied in applications. In recent years, global and local gradient domination properties have been shown to be a more realistic replacement for strong convexity. They were proved to hold in diverse settings such as (simple) policy gradient methods in reinforcement learning and training of deep neural networks with analytic activation functions. We prove almost sure convergence rates $f(X_n)-f^*\in o\big( n^{-\frac{1}{4\beta-1}+\epsilon}\big)$ of the last iterate for stochastic gradient descent (with and without momentum) under global and local $\beta$-gradient domination assumptions. The almost sure rates get arbitrarily close to recent rates in expectation. Finally, we demonstrate how to apply our results to the training task in both supervised and reinforcement learning. | [
"stochastic gradient methods",
"almost sure convergence",
"gradient domination",
"reinforcement learning",
"neural networks"
] | https://openreview.net/pdf?id=EGLZgZ8Gme | ovpKuXwM5h | decision | 1,722,287,920,361 | EGLZgZ8Gme | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: As noted by reviewer Sekc, this interesting work might be an outlier at EWRL. We emphasize the recommendation to better highlight the connection to RL |
CwyZTxSGWA | Periodic agent-state based Q-learning for POMDPs | [
"Amit Sinha",
"Matthieu Geist",
"Aditya Mahajan"
] | The standard approach for Partially Observable Markov Decision Processes (POMDPs) is to convert them to a fully observed belief-state MDP. However, the belief state depends on the system model and is therefore not viable in reinforcement learning (RL) settings. A widely used alternative is to use an agent state, which is a model-free, recursively updateable function of the observation history. Examples include frame stacking and recurrent neural networks. Since the agent state is model-free, it is used to adapt standard RL algorithms to POMDPs. However, standard RL algorithms like Q-learning learn a stationary policy. Our main thesis that we illustrate via examples is that because the agent state does not satisfy the Markov property, non-stationary agent-state based policies can outperform stationary ones. To leverage this feature, we propose PASQL (periodic agent-state based Q-learning), which is a variant of agent-state-based Q-learning that learns periodic policies. By combining ideas from periodic Markov chains and stochastic approximation, we rigorously establish that PASQL converges to a cyclic limit and characterize the approximation error of the converged periodic policy. Finally, we present a numerical experiment to highlight the salient features of PASQL and demonstrate the benefit of learning periodic policies over stationary policies. | [
"POMDPs",
"RL",
"Q-learning",
"non-stationary policies",
"non-Markovian environments"
] | https://openreview.net/pdf?id=CwyZTxSGWA | yeHtr7I65t | decision | 1,722,287,920,084 | CwyZTxSGWA | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This appears to be a well-developed contribution, which could deserve more discussion. The evaluation of reviewer Ddvk is very centered on RDPs and somehow discards the framework of POMDPs. Yet, we encourage the authors to take the formulated remarks into account, both for their final version and for future contributions. |
CnsamJpCol | A Distributional Analogue to the Successor Representation | [
"Harley Wiltzer",
"Jesse Farebrother",
"Arthur Gretton",
"Yunhao Tang",
"Andre Barreto",
"Will Dabney",
"Marc G Bellemare",
"Mark Rowland"
] | This paper contributes a new approach for distributional reinforcement learning which elucidates a clean separation of transition structure and reward in the learning process. Analogous to how the successor representation (SR) describes the expected consequences of behaving according to a given policy, our distributional successor measure (SM) describes the distributional consequences of this behaviour. We formulate the distributional SM as a distribution over distributions and provide theory connecting it with distributional and model-based reinforcement learning. Moreover, we propose an algorithm that learns the distributional SM from data by minimizing a two-level maximum mean discrepancy. Key to our method are a number of algorithmic techniques that are independently valuable for learning generative models of state. As an illustration of the usefulness of the distributional SM, we show that it enables zero-shot risk-sensitive policy evaluation in a way that was not previously possible. | [
"reinforcement learning",
"distributional reinforcement learning",
"risk-aware",
"successor representation",
"successor measure"
] | https://openreview.net/pdf?id=CnsamJpCol | WkZdNpwp7X | decision | 1,722,287,918,916 | CnsamJpCol | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
Bhtigv3Nvr | AFU: Actor-Free critic Updates in off-policy RL for continuous control | [
"Nicolas Perrin-Gilbert"
] | This paper presents AFU, an off-policy deep RL algorithm addressing in a new way the challenging ``max-Q problem'' in Q-learning for continuous action spaces, with a solution based on regression and conditional gradient scaling. AFU has an actor but its critic updates are entirely independent from it. As a consequence, the actor can be chosen freely. In the initial version, AFU-alpha, we employ the same stochastic actor as in Soft Actor-Critic (SAC), but we then study a simple failure mode of SAC and show how AFU can be modified to make actor updates less likely to become trapped in local optima, resulting in a second version of the algorithm, AFU-beta. Experimental results demonstrate the sample efficiency of both versions of AFU, marking it as the first model-free off-policy algorithm competitive with state-of-the-art actor-critic methods while departing from the actor-critic perspective. | [
"Reinforcement learning",
"Q-learning",
"Off-policy RL",
"Model-free RL",
"Continuous control"
] | https://openreview.net/pdf?id=Bhtigv3Nvr | dtvuPuns4e | decision | 1,722,287,917,097 | Bhtigv3Nvr | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
BOvPjZNKf3 | Learning to Explore with Lagrangians for Bandits under Unknown Constraints | [
"Udvas Das",
"Debabrota Basu"
] | Pure exploration in bandits can model eclectic real-world decision making problems, such as tuning hyper-parameters or conducting user studies, where sample frugality is desired. Thus, considering different safety, resource, and fairness constraints on the decision space has gained increasing attention. In this paper, we study a generalisation of these problems as pure exploration in multi-armed bandits with unknown linear constraints. First, we propose a Lagrangian relaxation of the sample complexity lower bound for pure exploration. We further derive how this lower bound converges to the existing lower bound for pure exploration under known constraints, and how the hardness of the problem changes with the geometry induced by the constraint estimation procedure. We further leverage the Lagrangian lower bound and properties of convex optimisation to propose two computationally efficient extensions of Track-and-Stop and Gamified Explorations, namely LATS and LAGEX. Designing these algorithms requires us to propose a new constraint-adaptive stopping rule and, at each step, to use pessimistic estimates of the constraints in the Lagrangian lower bound. We show that these algorithms asymptotically achieve the desired sample complexity bounds. Finally, we conduct numerical experiments with different reward distributions and constraints that validate the efficient performance of LAGEX and LATS with respect to baselines. | [
"Pure exploration",
"Unknown linear constraints",
"Bandits",
"Lagrangian optimization"
] | https://openreview.net/pdf?id=BOvPjZNKf3 | KxVRdb6jcU | decision | 1,722,287,919,282 | BOvPjZNKf3 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
Axp2UPvrm6 | Can Decentralized Q-learning learn to collude? | [
"Janusz M Meylahn"
] | The possibility of algorithmic collusion between pricing algorithms and the necessary antitrust legislation to regulate against it are hotly debated among academics and policymakers. However, none of the algorithms shown to collude have theoretical convergence guarantees and no theoretical framework exists for characterizing an algorithm's likelihood to collude. In this article, we summarize recent work which provides tools for quantifying the likelihood of collusion for a provably convergent algorithm and applies the results to two simple pricing environments. | [
"Decentralized multiagent reinforcement learning",
"Algorithmic collusion",
"Best response graphs",
"Basins of attraction"
] | https://openreview.net/pdf?id=Axp2UPvrm6 | ixzt8RK1Xn | decision | 1,722,287,920,808 | Axp2UPvrm6 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
AKU4h6BPG7 | Image-Based Dataset Representations for Predicting Learning Performance in Offline RL | [
"Enrique Mateos-Melero",
"Miguel Iglesias Alcázar",
"Raquel Fuentetaja",
"Peter Stone",
"Fernando Fernández"
] | In this paper, we address the challenge of predicting learning performance in offline Reinforcement Learning (RL). It is a crucial task to ensure the learned policy performs reliably in the real world and to avoid unsafe or costly interactions. We introduce a new approach that utilizes Convolutional Neural Networks (CNNs) to analyze offline RL datasets, represented as images. Our model predicts the performance of policies learned from these datasets within a specific RL framework, including the selected algorithm and hyperparameters. We explore the model's transferability across different scenarios with alterations in state space size or transition functions. Furthermore, we demonstrate an application of our model in optimizing offline RL datasets. Leveraging genetic algorithms, we navigate through potential dataset subsets to identify a reduced version that enhances policy learning efficiency. This optimized dataset reduces training time while achieving comparable or superior performance to the complete dataset. | [
"Offline Reinforcement Learning",
"Dataset Quality",
"Learning Performance Prediction",
"Convolutional Neural Netowrks"
] | https://openreview.net/pdf?id=AKU4h6BPG7 | GLfQi4Ox0d | decision | 1,722,287,916,497 | AKU4h6BPG7 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
9cbQNaKSyS | Model-based Sparse Communication in Multi-agent Reinforcement Learning | [
"Shuai Han",
"Mehdi Dastani",
"Shihan Wang"
] | Learning to communicate efficiently is central to multi-agent reinforcement learning (MARL). Existing methods often require agents to exchange messages intensively, which overuses communication channels and leads to high communication overhead. Only a few methods target learning sparse communication, but they allow limited information to be shared, which affects the efficiency of policy learning. In this work, we propose model-based communication (MBC), a learning framework with a decentralized communication scheduling process. The MBC framework enables multiple agents to make decisions with sparse communication. In particular, the MBC framework introduces a model-based message estimator to estimate the up-to-date global messages using past local data. A decentralized message scheduling mechanism is also proposed to determine whether a message shall be sent based on the estimation. We evaluated our method in a variety of mixed cooperative-competitive environments. The experimental results show that the MBC method achieves better performance and lower channel overhead than the state-of-the-art baselines. | [
"Multi-agent reinforcement learning",
"communication overhead",
"decentralized communication scheduling"
] | https://openreview.net/pdf?id=9cbQNaKSyS | 8Pgw4INLR1 | decision | 1,722,287,916,605 | 9cbQNaKSyS | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
9JAajeK84e | Applying Reinforcement Learning to Navigation In Partially Observable Flows | [
"Selim Mecanna",
"Aurore Loisy",
"Christophe Eloy"
] | We consider the problem of navigating in a fluid flow while being carried by it, using only information accessible from on-board sensors. This POMDP (partially observable Markov decision process) is particularly challenging because, to behave optimally, the agent has to exploit coherent structures that exist in the flow without observing them directly, and while being subjected to chaotic dynamics. Yet it is commonly faced by autonomous robots deployed in the oceans and drifting with the flow (e.g., for environmental monitoring). While some attempts have been made to use reinforcement learning for navigation in partially observable flows, progress has been limited by the lack of well-defined benchmarks and baselines for this application. In this paper, we first introduce a well-posed navigation POMDP for which a near-optimal policy is known analytically, thereby allowing for a critical assessment of reinforcement learning methods applied to autonomous navigation in complex flows. We then evaluate the 'vanilla' learning algorithms commonly used in the fluid mechanics community (Advantage Actor Critic, Q-Learning) and report on their poor performance. Finally, we provide an implementation of PPO (Proximal Policy Optimization) able to match the theoretical near-optimal performance. This demonstrates the feasibility of learning autonomous navigation strategies in complex flows as encountered in the oceans. | [
"Reinforcement Learning",
"POMDP",
"Navigation",
"Fluid Mechanics",
"Turbulence."
] | https://openreview.net/pdf?id=9JAajeK84e | U8e2AAebEm | decision | 1,722,287,920,536 | 9JAajeK84e | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
99rVbG1iJM | The Whys and Hows of Active Exploration in Model-Based Reinforcement Learning | [
"Alberto Caron",
"Chris Hicks",
"Vasilios Mavroudis"
] | In this work, we study the problem of sample efficient exploration in Model-Based Reinforcement Learning (MBRL). While most popular exploration methods in MBRL are "reactive" in nature, and thus inherently sample inefficient, we discuss the benefits of an "active" approach, where the agent selects actions to query novel states in a data-efficient way, provided that one can guarantee that regions of high epistemic, and not aleatoric, uncertainty are targeted. In order to ensure this, we consider popular exploration bonuses based on Bayesian surprise, and demonstrate their desirable properties under the assumption of a Gaussian Process model. We then introduce a novel exploration method, Bayesian Active Exploration, where the agent queries transitions based on a multi-step predictive search aimed at maximizing the expected information gain. Moreover, we propose alternative dynamics model specifications based on stochastic variational Gaussian Processes and deep kernels that allow for better scalability with sample size and state-action spaces, and accommodate non-tabular inputs by learning a latent representation, while maintaining good uncertainty-quantification properties. | [
"Model-Based Reinforcement Learning",
"Active Learning",
"Gaussian Processes"
] | https://openreview.net/pdf?id=99rVbG1iJM | EKJmFO7o6A | decision | 1,722,287,918,743 | 99rVbG1iJM | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
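The abstract above argues for querying where epistemic (not aleatoric) uncertainty is high, using Bayesian-surprise-style bonuses under a Gaussian Process model. A toy sketch of that selection rule follows, assuming a 1-D regression stand-in for the dynamics and a known noise level; both are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# For a GP with observation noise sigma_n^2, the information gained by querying x is
# 0.5 * log(1 + sigma_f(x)^2 / sigma_n^2), so we pick the candidate that maximizes it.
rng = np.random.default_rng(0)
sigma_n = 0.1
X_train = rng.uniform(-1, 1, size=(8, 1))
y_train = np.sin(3 * X_train[:, 0]) + sigma_n * rng.standard_normal(8)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=sigma_n**2)
gp.fit(X_train, y_train)

candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
_, std = gp.predict(candidates, return_std=True)
info_gain = 0.5 * np.log(1.0 + std**2 / sigma_n**2)   # targets epistemic uncertainty
best = candidates[np.argmax(info_gain)]
print("most informative query:", best)
```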
8tQfA8RwjG | Private Online Learning in Adversarial MDPs: Full-Information and Bandit | [
"Shaojie Bai",
"Lanting Zeng",
"Chengcheng Zhao",
"Xiaoming Duan",
"Mohammad Sadegh Talebi",
"Peng Cheng",
"Jiming Chen"
] | We study learning adversarial Markov decision process (MDP) in the episodic setting under the constraint of differential privacy (DP). This is motivated by the widespread applications of reinforcement learning (RL) in non-stationary and even adversarial scenarios, where protecting users' sensitive information is vital. We first propose two efficient frameworks for adversarial MDPs, spanning full-information and bandit settings. Within each framework, we consider both Joint DP (JDP), where a central agent is trusted to protect the sensitive data, and Local DP (LDP), where the information is protected directly on the user side. Then, we design novel privacy mechanisms to privatize the stochastic transition and adversarial losses. By instantiating such privacy mechanisms to satisfy JDP and LDP requirements, we obtain near-optimal regret guarantees for both frameworks. To our knowledge, these are the first algorithms to tackle the challenge of private learning in adversarial MDPs. | [
"Differential Privacy",
"Adversarial Markov Decision Proess",
"Regret Minimization"
] | https://openreview.net/pdf?id=8tQfA8RwjG | To9Qh9LqnF | decision | 1,722,287,919,686 | 8tQfA8RwjG | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
8esP9hxdsM | No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO | [
"Skander Moalla",
"Andrea Miele",
"Razvan Pascanu",
"Caglar Gulcehre"
] | Reinforcement learning (RL) is inherently rife with non-stationarity since the states and rewards the agent observes during training depend on its changing policy.
Therefore, networks in deep RL must be capable of adapting to new observations and fitting new targets.
However, previous works have observed that networks in off-policy deep value-based methods exhibit a decrease in representation rank, often correlated with an inability to continue learning or a collapse in performance.
Although this phenomenon has generally been attributed to neural network learning under non-stationarity, it has been overlooked in on-policy policy optimization methods which are often thought capable of training indefinitely.
In this work, we empirically study representation dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo environments, revealing that PPO agents are also affected by feature rank deterioration and loss of plasticity.
We show that this is aggravated with stronger non-stationarity, ultimately driving the actor's performance to collapse, regardless of the performance of the critic.
We ask why the trust region, specific to methods like PPO, cannot alleviate or prevent the collapse.
We find that there is a connection between representation collapse and the degradation of the trust region, one exacerbating the other, and present Proximal Feature Optimization (PFO), a novel auxiliary loss that, along with other interventions, shows that regularizing the representation dynamics improves the performance of PPO agents. | [
"proximal policy optimization",
"plasticity loss",
"trust region",
"feature rank collapse",
"regularization"
] | https://openreview.net/pdf?id=8esP9hxdsM | ZBDxdZXOF5 | decision | 1,722,287,915,882 | 8esP9hxdsM | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
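The abstract above hinges on measuring "feature rank deterioration". One common proxy, which may differ from the exact metric used in the paper, is the number of singular values needed to explain most of the feature matrix's spectrum; a small sketch:

```python
import numpy as np

# "Effective rank" of a feature matrix: how many singular values are needed to
# explain 99% of the spectrum. Collapsed representations need far fewer.
def effective_rank(features, threshold=0.99):
    # features: (batch, dim) activations of, e.g., the penultimate layer
    s = np.linalg.svd(features, compute_uv=False)
    cumulative = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cumulative, threshold) + 1)

rng = np.random.default_rng(0)
healthy = rng.standard_normal((256, 64))                                    # full-rank
collapsed = rng.standard_normal((256, 4)) @ rng.standard_normal((4, 64))    # rank 4
print(effective_rank(healthy), effective_rank(collapsed))  # high (~60) vs low (4)
```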
8RAhqQTGI5 | Tractable Offline Learning of Regular Decision Processes | [
"Ahana Deb",
"Roberto Cipollone",
"Anders Jonsson",
"Alessandro Ronca",
"Mohammad Sadegh Talebi"
] | This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments called Regular Decision Processes (RDPs). In RDPs, the unknown dependency of future observations and rewards from the past interactions can be captured by some hidden finite-state automaton. For this reason, many RDP algorithms first reconstruct this unknown dependency using automata learning techniques. In this paper, we show that it is possible to overcome two strong limitations of previous offline RL algorithms for RDPs, notably RegORL. This can be accomplished via the introduction of two original techniques: the development of a new pseudometric based on formal languages, which removes a problematic dependency on $L_\infty^\mathsf{p}$ distinguishability parameters, and the adoption of Count-Min-Sketch (CMS), instead of naive counting. The former reduces the number of samples required in environments that are characterized by a low complexity in language-theoretic terms. The latter alleviates the memory requirements for long planning horizons. We derive the PAC sample complexity bounds associated to each of these techniques, and we validate the approach experimentally. | [
"Reinforcement Learning",
"Non-Markov Decision Process",
"Offline Reinforcement Learning",
"Regular Decision Processes",
"Sample Complexity",
"Automata"
] | https://openreview.net/pdf?id=8RAhqQTGI5 | nRD8FO0go2 | decision | 1,722,287,919,656 | 8RAhqQTGI5 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
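The abstract above swaps naive counting for a Count-Min-Sketch to cap memory over long horizons. For reference, this is the standard data structure; the hash construction and sizes below are illustrative, and the paper's instantiation over RDP traces is not reproduced here.

```python
import numpy as np

# Minimal Count-Min-Sketch: `depth` rows of `width` counters with independent hashes.
# Queries return upper bounds on the true counts, trading exactness for O(width*depth) memory.
class CountMinSketch:
    def __init__(self, width=2048, depth=4, seed=0):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)

    def _index(self, item, row):
        return hash((int(self.salts[row]), item)) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row, self._index(item, row)] += count

    def query(self, item):
        return min(self.table[row, self._index(item, row)] for row in range(self.depth))

cms = CountMinSketch()
for symbol in "abracadabra":
    cms.update(symbol)
print(cms.query("a"), cms.query("z"))  # at least 5 (exact here), and almost surely 0
```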
88zP8xh5D2 | Impact of Collective Behaviors of Autonomous Vehicles on Urban Traffic Dynamics: A Multi-Agent Reinforcement Learning Approach | [
"Ahmet Onur Akman",
"Anastasia Psarou",
"Zoltán György Varga",
"Grzegorz Jamróz",
"Rafal Kucharski"
] | This study examines the potential impact of reinforcement learning (RL)-enabled autonomous vehicles (AV) on urban traffic flow in a mixed traffic environment. We focus on a simplified day-to-day route choice problem in a multi-agent setting. We consider a city network where human drivers travel through their chosen routes to reach their destinations in minimum travel time. Then, we convert one-third of the population into AVs, which are RL agents employing Deep Q-learning algorithm. We define a set of optimization targets, or as we call them behaviors, namely selfish, collaborative, competitive, social, altruistic, and malicious. We impose a selected behavior on AVs through their rewards. We run our simulations using our in-house developed RL framework PARCOUR. Our simulations reveal that AVs optimize their travel times by up to 5\%, with varying impacts on human drivers' travel times depending on the AV behavior. In all cases where AVs adopt a self-serving behavior, they achieve shorter travel times than human drivers. Our findings highlight the complexity differences in learning tasks of each target behavior. We demonstrate that the multi-agent RL setting is applicable for collective routing on traffic networks, though their impact on coexisting parties greatly varies with the behaviors adopted. | [
"MARL",
"Optimal Route Choice",
"Reinforcement Learning",
"Online RL",
"Deep Q Learning",
"DQN"
] | https://openreview.net/pdf?id=88zP8xh5D2 | 6VyB18IIvT | decision | 1,722,287,920,337 | 88zP8xh5D2 | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The contribution is interesting in itself but reviewers noted a number of weaknesses making it a borderline paper. We strongly encourage the authors to take this feedback into account in their final version. |
6Ky1NAToOc | Functional Acceleration for Policy Mirror Descent | [
"Veronica Chelu",
"Doina Precup"
] | We apply functional acceleration to the Policy Mirror Descent (PMD) general family of algorithms, which cover a wide range of novel and fundamental methods in Reinforcement Learning (RL). Leveraging duality, we propose a momentum-based PMD update.
By taking the functional route, our approach is independent of the policy parametrization and applicable to large-scale optimization, covering previous applications of momentum at the level of policy parameters as a special case.
We theoretically analyze several properties of this approach and complement it with a numerical ablation study, which serves to illustrate the policy optimization dynamics on the value polytope, relative to different algorithmic design choices in this space. We further characterize numerically several features of the problem setting relevant for functional acceleration, and lastly, we investigate the impact of approximation on their learning mechanics. | [
"functional acceleration",
"Policy Mirror Descent (PMD)",
"momentum",
"extrapolation",
"duality",
"lookahead"
] | https://openreview.net/pdf?id=6Ky1NAToOc | mKqtYhLxjT | decision | 1,722,287,919,056 | 6Ky1NAToOc | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
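For context on the update being accelerated in the abstract above: tabular Policy Mirror Descent with the KL mirror map multiplies the current policy by exponentiated action values and renormalizes. The sketch below shows only this vanilla step; the paper's momentum/extrapolation concerns how the target fed into such a step is formed, which is not reproduced here.

```python
import numpy as np

# One tabular PMD step with the KL mirror map: pi_{t+1}(a|s) ∝ pi_t(a|s) * exp(eta * Q(s, a)).
def pmd_step(pi, Q, eta=1.0):
    logits = np.log(pi) + eta * Q                    # shape (S, A)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pi = np.full((4, 3), 1 / 3)                          # uniform policy: 4 states, 3 actions
Q = rng.standard_normal((4, 3))                      # toy action values
print(np.round(pmd_step(pi, Q), 3))
```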
6ItKUaCzGK | Physics-Informed Model and Hybrid Planning for Efficient Dyna-Style Reinforcement Learning | [
"Zakariae EL ASRI",
"Olivier Sigaud",
"Nicolas THOME"
] | Applying reinforcement learning (RL) to real-world applications requires addressing a trade-off between asymptotic performance, sample efficiency, and inference time. In this work, we demonstrate how to address this triple challenge by leveraging partial physical knowledge about the system dynamics. Our approach involves learning a physics-informed model to boost sample efficiency and generating imaginary trajectories from this model to learn a model-free policy and Q-function. Furthermore, we propose a hybrid planning strategy, combining the learned policy and Q-function with the learned model to enhance time efficiency in planning. Through practical demonstrations, we illustrate that our method improves the compromise between sample efficiency, time efficiency, and performance over state-of-the-art methods. | [
"Reinforcement learning",
"Model-based reinforcement learning",
"Model-based RL and Planning",
"Offline reinforcement learning",
"Physics-informed reinforcement learning",
"Neural ODE"
] | https://openreview.net/pdf?id=6ItKUaCzGK | s3cMpj6DOS | decision | 1,722,287,918,159 | 6ItKUaCzGK | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
6Hp8ymjLjW | Bisimulation Metrics are Optimal Transport Distances, and Can be Computed Efficiently | [
"Sergio Calo",
"Anders Jonsson",
"Gergely Neu",
"Ludovic Schwartz",
"Javier Segovia-Aguas"
] | We propose a new framework for formulating optimal transport distances between
Markov chains. Previously known formulations studied couplings between the
entire joint distribution induced by the chains, and derived solutions via a reduction
to dynamic programming (DP) in an appropriately defined Markov decision process.
This formulation has, however, not led to particularly efficient algorithms so far,
since computing the associated DP operators requires fully solving a static optimal
transport problem, and these operators need to be applied numerous times during
the overall optimization process. In this work, we develop an alternative perspective
by considering couplings between a “flattened” version of the joint distributions
that we call discounted occupancy couplings, and show that calculating optimal
transport distances in the full space of joint distributions can be equivalently
formulated as solving a linear program (LP) in this reduced space. This LP
formulation allows us to port several algorithmic ideas from other areas of optimal
transport theory. In particular, our formulation makes it possible to introduce an
appropriate notion of entropy regularization into the optimization problem, which
in turn enables us to directly calculate optimal transport distances via a Sinkhorn-
like method we call Sinkhorn Value Iteration (SVI). We show both theoretically and
empirically that this method converges quickly to an optimal coupling, essentially
at the same computational cost of running vanilla Sinkhorn in each pair of states.
Along the way, we point out that our optimal transport distance exactly matches
the common notion of bisimulation metrics between Markov chains, and thus our
results also apply to computing such metrics, and in fact our algorithm turns out to
be significantly more efficient than the best known methods developed so far for
this purpose. | [
"optimal transport",
"Markov chains",
"bisimulation"
] | https://openreview.net/pdf?id=6Hp8ymjLjW | uywTEsomf4 | decision | 1,722,287,917,688 | 6Hp8ymjLjW | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
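The abstract above reduces bisimulation-metric computation to entropy-regularized optimal transport solved with Sinkhorn-style updates. Below is the standard static Sinkhorn iteration for reference, with toy marginals and costs; the paper's Sinkhorn Value Iteration wraps updates of this form inside a value-iteration loop over occupancy couplings, which is not shown.

```python
import numpy as np

# Entropy-regularized OT between two discrete distributions via Sinkhorn iterations.
def sinkhorn(a, b, cost, eps=0.05, iters=500):
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)            # scale columns to match marginal b
        u = a / (K @ v)              # scale rows to match marginal a
    plan = u[:, None] * K * v[None, :]
    return plan, np.sum(plan * cost)

a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
cost = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
plan, transport_cost = sinkhorn(a, b, cost)
print(np.round(plan, 3), round(transport_cost, 3))
```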
5ok1UBnMKh | Cyclicity-Regularized Coordination Graphs | [
"Oliver Järnefelt",
"Mahdi Kallel",
"Carlo D'Eramo"
] | In parallel with the rise of the successful value function factorization approach, numerous recent studies on Cooperative Multi-Agent Reinforcement Learning (MARL) have explored the application of Coordination Graphs (CG) to model the communication requirements among the agent population. These coordination problems often exhibit structural sparsity, which facilitates accurate joint value function learning with CGs. Value-based methods necessitate the computation of argmaxes over the exponentially large joint action space, leading to the adoption of the max-sum method from the distributed constraint optimization (DCOP) literature. However, it has been empirically observed that the performance of max-sum deteriorates with an increase in the number of agents, attributed to the increased cyclicity of the graph. While previous works have tackled this issue by sparsifying the graph based on a metric of edge importance, thereby demonstrating improved performance, we argue that neglecting topological considerations during the sparsification procedure can adversely affect action selection. Consequently, we advocate for the explicit consideration of graph cyclicity alongside edge importances. We demonstrate that this approach results in superior performance across various challenging coordination problems. | [
"Multi-agent reinforcement learning",
"Sparse coordination graphs",
"Graph cyclicity"
] | https://openreview.net/pdf?id=5ok1UBnMKh | 2fnwsTHdSO | decision | 1,722,287,917,155 | 5ok1UBnMKh | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
56VV0qxgyI | Generalized Nested Rollout Policy Adaptation with Limited Repetitions | [
"Tristan Cazenave"
] | Generalized Nested Rollout Policy Adaptation (GNRPA) is a Monte Carlo search algorithm for optimizing a sequence of choices. We propose to improve on GNRPA by avoiding overly deterministic policies that repeatedly find the same sequence of choices. We do so by limiting the number of repetitions of the best sequence found at a given level. Experiments show that it improves the algorithm for three
different combinatorial problems: Inverse RNA Folding, the Traveling Salesman Problem with Time Windows and the Weak Schur problem. | [
"Monte Carlo Search"
] | https://openreview.net/pdf?id=56VV0qxgyI | 4NSNnvgsYx | decision | 1,722,287,919,931 | 56VV0qxgyI | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Although the contribution does not seem a major leap forward, it is a correct and interesting contribution. We encourage the authors to take all reviewers' comments in preparation of their final version. |
51XSWH0mgN | Inferring Behavior-Specific Context Improves Zero-Shot Generalization in Reinforcement Learning | [
"Tidiane CAMARET NDIR",
"André Biedenkapp",
"Noor Awad"
] | In zero-shot generalization (ZSG) in Reinforcement Learning (RL), agents must adapt to entirely novel environments without additional training. Understanding and utilizing contextual cues, such as the gravity level of the environment, is critical for robust generalization.
We propose to integrate the learning of context representations directly with policy learning. Our algorithm demonstrates improved generalization on various simulated domains, outperforming prior context-learning techniques in zero-shot settings. By jointly learning policy and context, our method develops behavior-specific context representations, enabling adaptation to unseen environments. This approach marks significant progress toward reinforcement learning systems that can generalize across diverse real-world tasks. | [
"Zero-shot generalization",
"Contextual reinforcement learning",
"Meta-learning"
] | https://openreview.net/pdf?id=51XSWH0mgN | MmCBHBUr89 | decision | 1,722,287,917,023 | 51XSWH0mgN | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
4uUFCvuqau | Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning | [
"Théo Vincent",
"Fabian Wahren",
"Jan Peters",
"Boris Belousov",
"Carlo D'Eramo"
] | Deep Reinforcement Learning (RL) is well known for being highly sensitive to hyperparameters, requiring practitioners substantial efforts to optimize them for the problem at hand. In recent years, the field of automated Reinforcement Learning (AutoRL) has grown in popularity by trying to address this issue. However, these approaches typically hinge on additional samples to select well-performing hyperparameters, hindering sample-efficiency and practicality in RL. Furthermore, most AutoRL methods are heavily based on already existing AutoML methods, which were originally developed neglecting the additional challenges inherent to RL due to its non-stationarities. In this work, we propose a new approach for AutoRL, called Adaptive $Q$-Network (AdaQN), that is tailored to RL to take into account the non-stationarity of the optimization procedure without requiring additional samples. AdaQN learns several $Q$-functions, each one trained with different hyperparameters, which are updated online using the $Q$-function with the smallest approximation error as a shared target. Our selection scheme simultaneously handles different hyperparameters while coping with the non-stationarity induced by the RL optimization procedure and being orthogonal to any critic-based RL algorithm. We demonstrate that AdaQN is theoretically sound and empirically validate it in MuJoCo control problems, showing benefits in sample-efficiency, overall performance, training stability, and robustness to stochasticity. | [
"automated reinforcement learning",
"deep reinforcement learning",
"hyperparameter selection"
] | https://openreview.net/pdf?id=4uUFCvuqau | R4EfdAsC3E | decision | 1,722,287,915,934 | 4uUFCvuqau | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
4osMPTZFBF | Maximum Entropy On-Policy Actor-Critic via Entropy Advantage Estimation | [
"Jean Seong Bjorn Choe",
"Jong-Kook Kim"
] | Entropy Regularisation is a widely adopted technique that enhances policy optimisation performance and stability. A notable form of entropy regularisation is augmenting the objective with an entropy term, thereby simultaneously optimising the expected return and the entropy. This framework, known as maximum entropy reinforcement learning (MaxEnt RL), has shown theoretical and empirical successes. However, its practical application in straightforward on-policy actor-critic settings remains surprisingly underexplored. We hypothesise that this is due to the difficulty of managing the entropy reward in practice. This paper proposes a simple method of separating the entropy objective from the MaxEnt RL objective, which facilitates the implementation of MaxEnt RL in on-policy settings. Our empirical evaluations demonstrate that extending Proximal Policy Optimisation (PPO) and Trust Region Policy Optimisation (TRPO) within the MaxEnt framework improves policy optimisation performance in both MuJoCo and Procgen tasks. Additionally, our results highlight MaxEnt RL's capacity to enhance generalisation. | [
"policy gradient",
"entropy regularization",
"maximum entropy reinforcement learning",
"actor-critic"
] | https://openreview.net/pdf?id=4osMPTZFBF | RgXWRftD4U | decision | 1,722,287,918,297 | 4osMPTZFBF | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
4H1sAt9w4J | Countering Reward Over-optimization in LLM with Demonstration-Guided Reinforcement Learning | [
"Mathieu Rita",
"Florian Strub",
"Rahma Chaabouni",
"Paul Michel",
"Emmanuel Dupoux",
"Olivier Pietquin"
] | While reinforcement learning (RL) has been proven essential for tuning large language models (LLMs), it can lead to reward over-optimization (ROO). Existing approaches address ROO by adding KL regularization, requiring computationally expensive hyperparameter tuning. Additionally, KL regularization focuses solely on regularizing the language policy, neglecting a potential source of regularization: the reward function itself. Inspired by demonstration-guided RL, we here introduce the Reward Calibration from Demonstration (RCfD), which leverages human demonstrations and a reward model to recalibrate the reward objective. Formally, given a prompt, the RCfD objective minimizes the distance between the demonstrations' and LLM's rewards rather than directly maximizing the reward function. This objective shift avoids incentivizing the LLM to exploit the reward model and promotes more natural and diverse language generation.
We show the effectiveness of RCfD on three language tasks, where it achieves comparable performance to carefully tuned baselines while mitigating ROO. | [
"RL with demonstation",
"LLM",
"RLHF"
] | https://openreview.net/pdf?id=4H1sAt9w4J | U0fCE2Bx5I | decision | 1,722,287,919,803 | 4H1sAt9w4J | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
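The key objective shift in the RCfD abstract above is easy to state: rather than maximizing the reward model's score, penalize the gap between the score of the model's sample and the score of a human demonstration for the same prompt. A toy rendering of that loss follows; the tensors are stand-ins, and the actual method operates on LLM completions scored by a learned reward model.

```python
import torch

# Reward-calibration-style loss: drive the reward of the model's samples toward the
# reward of demonstrations, instead of pushing the reward as high as possible.
def calibration_loss(model_scores, demo_scores):
    # model_scores: r(x, y_model) for sampled completions
    # demo_scores:  r(x, y_demo) for human demonstrations of the same prompts
    return ((model_scores - demo_scores) ** 2).mean()

model_scores = torch.tensor([2.7, 3.1, 0.4])
demo_scores = torch.tensor([1.9, 2.8, 1.0])
print(calibration_loss(model_scores, demo_scores))
```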
4AjvgisPzp | Imitation Learning in Discounted Linear MDP without exploration assumptions | [
"Luca Viano",
"Stratis Skoulakis",
"Volkan Cevher"
] | We present a new algorithm for imitation learning in infinite horizon linear MDPs, dubbed ILARL, which greatly improves the bound on the number of trajectories that the learner needs to sample from the environment.
In particular, we remove exploration assumptions required in previous works and we improve the dependence on the desired accuracy $\epsilon$ from $\mathcal{O}(\epsilon^{-5})$ to $\mathcal{O}(\epsilon^{-4})$.
Our result relies on a connection between imitation learning and online learning in MDPs with adversarial losses. For the latter setting, we present the first result for infinite horizon linear MDPs, which may be of independent interest. Moreover, we are able to provide a strengthened result for the finite horizon case, where we achieve $\mathcal{O}(\epsilon^{-2})$. Numerical experiments with linear function approximation show that ILARL outperforms other commonly used algorithms. | [
"imitation learning theory"
] | https://openreview.net/pdf?id=4AjvgisPzp | zOvTGXflAz | decision | 1,722,287,916,986 | 4AjvgisPzp | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
3afe0i8YxX | Zero-shot cross-modal transfer of Reinforcement Learning policies through a Global Workspace | [
"Léopold Maytié",
"Benjamin Devillers",
"Alexandre Arnold",
"Rufin VanRullen"
] | Humans perceive the world through multiple senses, enabling them to create a comprehensive representation of their surroundings and to generalize information across domains. For instance, when a textual description of a scene is given, humans can mentally visualize it. In fields like robotics and Reinforcement Learning (RL), agents can also access information about the environment through multiple sensors; yet redundancy and complementarity between sensors is difficult to exploit as a source of robustness (e.g. against sensor failure) or generalization (e.g. transfer across domains). Prior research demonstrated that a robust and flexible multimodal representation can be efficiently constructed based on the cognitive science notion of a 'Global Workspace': a unique representation trained to combine information across modalities, and to broadcast its signal back to each modality. Here, we explore whether such a brain-inspired multimodal representation could be advantageous for RL agents. First, we train a 'Global Workspace' to exploit information collected about the environment via two input modalities (a visual input, or an attribute vector representing the state of the agent and/or its environment). Then, we train a RL agent policy using this frozen Global Workspace. In two distinct environments and tasks, our results reveal the model's ability to perform zero-shot cross-modal transfer between input modalities, i.e. to apply to image inputs a policy previously trained on attribute vectors (and vice-versa), without additional training or fine-tuning. Variants and ablations of the full Global Workspace (including a CLIP-like multimodal representation trained via contrastive learning) did not display the same generalization abilities. | [
"Global Workspace",
"Multimodal Representation Learning",
"Cross-modal Policy Transfer",
"Cognitive Science"
] | https://openreview.net/pdf?id=3afe0i8YxX | regWkT0IZS | decision | 1,722,287,917,814 | 3afe0i8YxX | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
2Wl4aFxIcO | Sum-Max Submodular Bandits | [
"Stephen Pasteris",
"Alberto Rumi",
"Fabio Vitale",
"Nicolò Cesa-Bianchi"
] | Many online decision-making problems correspond to maximizing a sequence of submodular functions. In this work, we introduce sum-max functions, a subclass of monotone submodular functions capturing several interesting problems, including best-of-$K$-bandits, combinatorial bandits, and the bandit versions on $M$-medians and hitting sets. We show that all functions in this class satisfy a key property that we call pseudo-concavity. This allows us to prove $\big(1 - \frac{1}{e}\big)$-regret bounds for bandit feedback in the nonstochastic setting of the order of $\sqrt{MKT}$ (ignoring log factors), where $T$ is the time horizon and $M$ is a cardinality constraint. This bound, attained by a simple and efficient algorithm, significantly improves on the $\widetilde{\mathcal{O}}\big(T^{2/3}\big)$ regret bound for online monotone submodular maximization with bandit feedback. We also extend our results to a bandit version of the facility location problem. | [
"Multi armed bandits",
"Online decision-making"
] | https://openreview.net/pdf?id=2Wl4aFxIcO | pyBosVcjGI | decision | 1,722,287,918,157 | 2Wl4aFxIcO | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
2Ne6Fb4BgW | Finding good policies in average-reward Markov Decision Processes without prior knowledge | [
"Adrienne Tuynman",
"Emilie Kaufmann",
"Rémy Degenne"
] | We revisit the identification of an $\varepsilon$-optimal policy in average-reward Markov Decision Processes (MDP). In such MDPs, two measures of complexity have appeared in the literature: the diameter, $D$, and the optimal bias span, $H$, which satisfy $H\leq D$. Prior work have studied the complexity of $\varepsilon$-optimal policy identification only when a generative model is available. In this case, it is known that there exists an MDP with $D \simeq H$ for which the sample complexity to output an $\varepsilon$-optimal policy is $\Omega(SAD/\varepsilon^2)$ where $S$ and $A$ are the sizes of the state and action spaces. Recently, an algorithm with a sample complexity of order $SAH/\varepsilon^2$ has been proposed, but it requires the knowledge of $H$. We first show that the sample complexity required to estimate $H$ is not bounded by any function of $S,A$ and $H$, ruling out the possibility to easily make the previous algorithm agnostic to $H$. By relying instead on a diameter estimation procedure, we propose the first algorithm for $(\varepsilon,\delta)$-PAC policy identification that does not need any form of prior knowledge on the MDP. Its sample complexity scales in $SAD/\varepsilon^2$ in the regime of small $\varepsilon$, which is near-optimal. In the online setting, our first contribution is a lower bound which implies that a sample complexity polynomial in $H$ cannot be achieved in this setting. Then, we propose an online algorithm with a sample complexity in $SAD^2/\varepsilon^2$, as well as a novel approach based on a data-dependent stopping rule that we believe is promising to further reduce this bound. | [
"sample complexity",
"Markov decision process",
"best policy identification",
"average reward"
] | https://openreview.net/pdf?id=2Ne6Fb4BgW | 6nh1Ajzy3p | decision | 1,722,287,917,910 | 2Ne6Fb4BgW | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
1Jl0PErLDe | Online learning in CMDPs with adversarial losses and stochastic hard constraints | [
"Francesco Emanuele Stradi",
"Matteo Castiglioni",
"Alberto Marchesi",
"Nicola Gatti"
] | We study online learning in constrained Markov decision processes (CMDPs) with adversarial losses and stochastic hard constraints, under bandit feedback. We consider two different scenarios. In the first one, we address general CMDPs, where we design an algorithm attaining sublinear regret and cumulative positive constraints violation. In the second scenario, under the mild assumption that a policy strictly satisfying the constraints exists and is known to the learner, we design an algorithm that achieves sublinear regret while ensuring that constraints are satisfied at every episode with high probability. To the best of our knowledge, our work is the first to study CMDPs involving both adversarial losses and hard constraints. Indeed, previous works either focus on much weaker soft constraints---allowing for positive violation to cancel out negative ones---or are restricted to stochastic losses. Thus, our algorithms can deal with general non-stationary environments subject to requirements much stricter than those manageable with state-of-the-art ones. This enables their adoption in a much wider range of real-world applications, ranging from autonomous driving to online advertising and recommender systems. | [
"Online Learning",
"CMDPs",
"Hard Constraints"
] | https://openreview.net/pdf?id=1Jl0PErLDe | TOSyMMPwC0 | decision | 1,722,287,916,048 | 1Jl0PErLDe | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
0xfGFx3Zkc | Exploration by Learning Diverse Skills through Successor State Measures | [
"Paul-Antoine LE TOLGUENEC",
"Yann Besse",
"Florent Teichteil-Königsbuch",
"Dennis George Wilson",
"Emmanuel Rachelson"
] | The ability to perform different skills can encourage agents to explore. In this work, we aim to construct a set of diverse skills that uniformly cover the state space. We propose a formalization of this search for diverse skills, building on a previous definition based on the mutual information between states and skills. We consider the distribution of states reached by a policy conditioned on each skill and leverage the successor state measure to maximize the difference between these skill distributions. We call this approach LEADS: Learning Diverse Skills through Successor State Measures. We demonstrate our approach on a set of maze navigation and robotic control tasks which show that our method is capable of constructing a diverse set of skills which exhaustively cover the state space without relying on reward or exploration bonuses. Our findings demonstrate that this new formalization promotes more robust and efficient exploration by combining mutual information maximization and exploration bonuses. | [
"reinforcement learning",
"exploration",
"diversity"
] | https://openreview.net/pdf?id=0xfGFx3Zkc | XpYQYIbJek | decision | 1,722,287,917,615 | 0xfGFx3Zkc | [
"everyone"
] | [
"EWRL/2024/Workshop/Program_Chairs"
] | title: Paper Decision
decision: Accept |
zhCBrgaQZ0 | Resolving Discrepancies in Compute-Optimal Scaling of Language Models | [
"Tomer Porian",
"Mitchell Wortsman",
"Jenia Jitsev",
"Ludwig Schmidt",
"Yair Carmon"
] | Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes. | [
"LLMs",
"scaling laws",
"Chinchilla",
"copmute-optimal",
"power laws"
] | https://openreview.net/pdf?id=zhCBrgaQZ0 | uIACHeuSfM | decision | 1,718,650,382,514 | zhCBrgaQZ0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Oral)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
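The scaling-law abstracts and reviews in this block all revolve around fits of the form N_opt ∝ C^b. A minimal sketch of how such an exponent is estimated by log-log regression, using made-up (C, N_opt) pairs rather than any measurements from the paper:

```python
import numpy as np

# Fit N_opt = a * C^b by linear regression in log-log space.
# The (C, N_opt) pairs are fabricated for illustration; in practice they come from
# the minimizer of the loss-vs-model-size curve at each compute budget.
C = np.array([1e17, 1e18, 1e19, 1e20])           # compute budgets (FLOPs)
N_opt = np.array([2.5e7, 8.1e7, 2.4e8, 7.9e8])   # best model size at each budget

b, log_a = np.polyfit(np.log(C), np.log(N_opt), deg=1)
print(f"exponent b = {b:.2f}")                   # ~0.5 would match Chinchilla-style scaling
print(f"predicted N_opt at 1e21 FLOPs: {np.exp(log_a) * (1e21) ** b:.3g}")
```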
zhCBrgaQZ0 | Resolving Discrepancies in Compute-Optimal Scaling of Language Models | [
"Tomer Porian",
"Mitchell Wortsman",
"Jenia Jitsev",
"Ludwig Schmidt",
"Yair Carmon"
] | Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes. | [
"LLMs",
"scaling laws",
"Chinchilla",
"copmute-optimal",
"power laws"
] | https://openreview.net/pdf?id=zhCBrgaQZ0 | U3svqrHMlq | official_review | 1,718,177,987,004 | zhCBrgaQZ0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission17/Reviewer_E9ap"
] | title: Are there any findings in practice against the existing laws?
summary: Overall Rating: borderline reject
strengths: This paper proposes a new scaling law for the optimal model size as a function of the compute budget C. The proposed analysis first departs from the law of Kaplan et al. [2020], and then points out that the law proposed by Hoffmann et al. [2022] did not correctly address the role of learning rate decay. To explain this, the paper points out the factors that lead to the discrepancies: last layer computational cost, warmup duration, and scale-dependent optimizer tuning.
S1: The paper studies a very hot topic; the scaling law is crucial for planning the resources and infrastructure set-up for building LLMs.
S2: It is novel that these three factors can be important in describing the relationship between the compute budget, model size, and token count.
weaknesses: W1: The paper is not well motivated. Kaplan et al. [2020] and Hoffmann et al. [2022] are indeed influential. What is the motivation for rethinking existing scaling laws? Are there any findings in practice against the existing laws?
W2: Some details need to be added.
1. For equation 1, how do the authors come up with that assumption? Please add the citation or the derivation used to obtain this.
W3: The experiments need to be revised. The datasets used to test the proposed law are not convincing. The most important point of the paper is to correct Hoffmann et al. [2022]; however, the experiments did not evaluate the law on the MassiveText dataset. Why are all experiments testing the 3 factors run only on the OpenWebText2 and RefinedWeb datasets? Besides, are WebText2 and OpenWebText2 the same?
Please clarify.
confidence: 3 |
zhCBrgaQZ0 | Resolving Discrepancies in Compute-Optimal Scaling of Language Models | [
"Tomer Porian",
"Mitchell Wortsman",
"Jenia Jitsev",
"Ludwig Schmidt",
"Yair Carmon"
] | Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes. | [
"LLMs",
"scaling laws",
"Chinchilla",
"copmute-optimal",
"power laws"
] | https://openreview.net/pdf?id=zhCBrgaQZ0 | S1eMDMsVNX | official_review | 1,718,158,144,705 | zhCBrgaQZ0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission17/Reviewer_pKYF"
] | title: Official Review of Submission17
summary: The paper tackles scaling laws, particularly the discrepancies seen between the Kaplan et al. laws from 2020, which focussed on scaling to larger model sizes over data regimes, and the Hoffman et al. laws from 2022, which focussed on optimal scaling of both model and data sizes. The authors make note of some discrepancies in the original Kaplan laws, which led to the original recommendations, find fixes for those, and then suggest how each fix takes them closer to the suggested Hoffman laws. They provide detailed experimentation results and sweeps over different datasets, model sizes and other hyper-parameters.
Based on the strengths listed below and the overall coherence of the writing and results presentation, I recommend an accept for the paper.
strengths: 1. the paper follows a methodical approach to solving the discrepancies between two popular scaling laws. They first reproduce the Kaplan laws, which are inline with the original paper.
2. The proposed methods for fixing the laws also account for the changes in the parameter counts (which play an important role in the compute C term).
3. The paper has many detailed ablations for each of the proposed fixes, verifying their hypotheses for the differences.
4. Finally, they also scale their experiments across a range of different model sizes (5M - 901M), addressing one of the fundamental issues raised in the Hoffman paper about the original Kaplan laws being derived from a majority of their runs on models < 100M.
5. The paper has a very detailed related work section, setting the context for their different studies / ablations.
6. There are some interesting findings in how to predict batch size and learning rate across model scales based on scaling ablations.
weaknesses: Please note that these are not weaknesses per se, but some questions that come to mind reading through the paper:
1. For the parameter count differences, if we look at architectural differences, since the Chinchilla models follow the original Gopher architecture, they use Transformer XL for position embeddings - which adds overheads in the parameter count due to trainable vectors. Do the authors have an intuition for how to account for this difference, beyond including the last linear layer (projection)?
2. While the findings around tuning the AdamW constants are interesting, there is also a question about whether constant learning rates are practical in large-scale training runs, and about potential diminishing returns from hyper-tweaking each aspect of the optimization process. Have the authors considered any other approach to close the gap between their cosine-decay and optimizer-tuned runs?
3. Given that the authors use methods which stabilize training runs (such as z_loss, qk_norm, decoupled weight_decay), is there any shift they observe in the optimal training parameters, especially in their optimizer tuning? How would these parameters change if one doesn't use something like z_loss or qk_norm?
confidence: 4
limitations: Please note that these are not limitations per se, but some questions / thoughts that came to mind reading through the paper:
1. one of the biggest challenges with finding good scaling laws is testing across different order of magnitudes (OOMs) of scale. For example, Hoffman laws experimented across 3 OOMs of model scales (50M ~ 16B params). I get the compute limitations affect the scale of experiments that are possible - can the authors comment on any potential adaptations in their methodology based on testing for 1 additional OOM of model scale?
2. Recent attempts at replicating Hoffman et al. [1] show that the laws may have been suboptimal in their fit. Given that this is based on figure extraction and not the exact data points, it is possible that the replication itself was suboptimal. But do the authors have any intuition about what could be missing here between tuning for the original Hoffman laws vs the newly fit versions? Would the authors expect to adapt their methodology based on this?
[1] Chinchilla Scaling: A replication attempt (https://arxiv.org/abs/2404.10102)
suggestions: 1. One thing to clarify up front for readers is which definition of N is being used for the FLOPs computation (N_eff vs N in Table 2), and then point to the reasoning in the Appendix for further details.
2. It might be good to clarify the reason for choosing the upper bound on the model size (~900M) due to the token limit of OpenWebText2 (30B) for scaling with Chinchilla (method 2) - would any significantly larger model sizes lead to potential repeating of the data, which would change the laws altogether? |
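The review above brings up tuning the AdamW β2 parameter at small batch sizes. For readers unfamiliar with where that knob lives, here is how it is set in PyTorch; the specific values are placeholders, not the paper's recommendations.

```python
import torch

# AdamW's beta_2 is the second entry of `betas`; lowering it from the default 0.999
# shortens the second-moment horizon, which can matter at small batch sizes.
model = torch.nn.Linear(512, 512)
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-4,
    betas=(0.9, 0.95),   # illustrative values
    weight_decay=0.1,
)
```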
zhCBrgaQZ0 | Resolving Discrepancies in Compute-Optimal Scaling of Language Models | [
"Tomer Porian",
"Mitchell Wortsman",
"Jenia Jitsev",
"Ludwig Schmidt",
"Yair Carmon"
] | Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes. | [
"LLMs",
"scaling laws",
"Chinchilla",
"copmute-optimal",
"power laws"
] | https://openreview.net/pdf?id=zhCBrgaQZ0 | PVs40PeN53 | official_review | 1,718,050,652,588 | zhCBrgaQZ0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission17/Reviewer_D4Y8"
] | title: Review: "Resolving Discrepancies in Compute-Optimal Scaling of Language Models"
summary: This paper addresses the observed discrepancy between the scaling laws for optimal language model size, as proposed by Kaplan et al. and Hoffmann et al. The authors identify and provide corrections to 3 key factors: last layer computational cost, warmup duration and optimizer tuning, which they point to as major contributing factors behind the differences in predictions between the two scaling laws. By reproducing the work of Kaplan et al. on the OpenWebText2 and RefinedWeb datasets and adjusting the mentioned factors, they achieve alignment with the Hoffmann et al. scaling law. The authors, contrary to Hoffmann et al.'s hypothesis, find that learning rate decay is not as essential as was thought for the validity of the scaling law. Additionally, this work derives scaling laws for the optimal learning rate and batch size and highlights the necessity of tuning the AdamW B2 parameter at lower batch sizes.
strengths: In this work, the authors successfully reproduce Kaplan et al.'s scaling law, providing a basis for further analysis and comparison with Hoffmann et al. This basis helped to identify and correct three key factors of the scaling laws—last layer computational cost, warmup duration, and optimizer tuning. These findings are well-motivated and experimentally validated.
The paper provides new insights into the role of learning rate decay, showing it is not essential for scaling law validity, contrary to previous hypotheses. Hoffmann et al. hypothesized that the discrepancies between their scaling law and Kaplan et al.'s were due to differences in how the learning rate decay was applied during training. However, in this paper, the authors find that after correcting for other factors—such as last layer computational cost, warmup duration, and optimizer tuning—there is little effect from matching the learning rate decay to the token budget on the compute-optimal scaling law.
The promise to release code and data makes a strong case for the reproducibility of the results and insights presented in this paper.
The authors provide a comprehensive overview of the limitations, including an analysis of each limitation, theoretical explanations, and plans for future work.
weaknesses: In this work, certain assumptions are made, such as the fixed compute budget and specific dataset choices, which may limit the applicability of the results to other contexts. However, the authors provide a comprehensive list of weaknesses in Appendix A (Limitations), which at least partially addresses these issues.
confidence: 3 |
zhCBrgaQZ0 | Resolving Discrepancies in Compute-Optimal Scaling of Language Models | [
"Tomer Porian",
"Mitchell Wortsman",
"Jenia Jitsev",
"Ludwig Schmidt",
"Yair Carmon"
] | Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes. | [
"LLMs",
"scaling laws",
"Chinchilla",
"copmute-optimal",
"power laws"
] | https://openreview.net/pdf?id=zhCBrgaQZ0 | 7Li4jLhpdZ | meta_review | 1,718,434,695,188 | zhCBrgaQZ0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission17/Area_Chair_Uz6r"
] | metareview: This manuscript revisits the scaling law discrepancy of Kaplan et al. and Hoffmann et al., and identifies three factors causing the difference. The study does provide some insights into this field. Please use the reviewers' comments to further improve the manuscript.
recommendation: Accept (Poster)
confidence: 3 |
xHQeok6ZWX | Optimistic Asynchrony Control: Achieving Synchronous Convergence With Asynchronous Throughput for Embedding Model Training | [
"Roger Waleffe",
"Jason Mohoney"
] | Modern embedding-based machine learning (ML) models can contain hundreds of gigabytes of parameters, often exceeding the capacity of GPU hardware accelerators critical for training. One solution is to use a mixed CPU-GPU setup, where embedding parameters are stored in CPU memory and subsets are repeatedly transferred to the GPU for computation. In this setup two training paradigms exist: synchronous training and asynchronous training. In the former, batches are transferred one by one, leading to low throughput but fast model convergence. In contrast, during asynchronous training batches are transferred in parallel, allowing for more batches to be processed per unit time. Asynchronous training, however, can effect model quality due to concurrent batches which access the same model parameters leading to stale updates. In this work, we present Optimistic Asynchrony Control, a method for allowing asynchronous batch processing while ensuring model equivalence to a synchronous training execution. Our method is inspired by Optimistic Concurrency Control used in database systems. The main idea is to allow parallel processing and transfer of batches from the CPU to the GPU, but to validate each batch on the GPU before the model is updated to ensure that it has the correct values---the values it would have had if batches were processed and transferred one by one. We show that OAC achieves the best of both worlds, retaining the convergence of synchronous training while matching the throughput of asynchronous ML. This allows OAC to achieve the best time-to-accuracy of the three methods for mixed CPU-GPU embedding model training. | [
"embedding model training",
"asynchronous training",
"convergence",
"graph neural networks",
"GNNs"
] | https://openreview.net/pdf?id=xHQeok6ZWX | wTRUCNuYNz | official_review | 1,717,804,207,820 | xHQeok6ZWX | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission40/Reviewer_oS4U"
] | title: Asynchronous Training with Parameter-Update Order Guarantees
summary: This paper introduces Optimistic Asynchrony Control (OAC), an algorithm for batching training data, using parallel batch transfers and optimistic locking. OAC takes inspiration from optimistic concurrency control methods used in concurrent databases and applies this concept to model training, with the aim of improving model training times, particularly of large models on CPU-GPU systems with significant memory limitations. In the paper, authors describe the workings of the algorithm and provide experimental results.
strengths: * Compelling experimental results show that OAC performs better than some traditional asynchronous training methods, and has properties on par with synchronous training, which supports their theoretical assertions.
* The problem being solved, of efficiently coordinating CPU-GPU training with memory limitations, is quite relevant.
* The concept of applying optimistic locking to model parameters is sound, and seems to have been implemented carefully.
* Good discussion of technical details associated with implementing the algorithm.
weaknesses: * Explanation of the OAC algorithm and its correctness is quite long-winded and relies primarily on small, worked examples.
* No source code is available; however, it was mentioned that OAC will be added to the open-source system Marius, which is promising.
* Only two experiments performed.
confidence: 4
suggestions: * A more concise justification, using more general arguments, of why OAC performs the same parameter updates as synchronous training.
* Pseudocode for core parts of the algorithm, such as management of timestamps and parameter locks.
* More reference and discussion of existing optimistic concurrency control methods, and how OAC relates or differs.
* It would be good to see more experiments done with a variety of learning tasks and models.
* Line 371: "sever" should be "severe". |
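The OAC reviews above ask for pseudocode of the validation step. The abstract's core idea follows classic optimistic concurrency control: read parameter versions optimistically, then commit an update only if none of the read versions changed in the meantime. A toy Python rendering of that pattern follows; plain dictionaries stand in for the CPU parameter store, and this is not the Marius implementation.

```python
# Minimal optimistic-concurrency sketch: per-parameter version counters, a read
# snapshot taken when a batch is prepared, and a validate-then-commit step.
params = {"emb_0": 0.0, "emb_1": 0.0}
versions = {k: 0 for k in params}

def read_snapshot(keys):
    return {k: (params[k], versions[k]) for k in keys}

def try_commit(snapshot, updates):
    # validate: abort if any parameter was modified since it was read
    if any(versions[k] != ver for k, (_, ver) in snapshot.items()):
        return False
    for k, delta in updates.items():
        params[k] += delta
        versions[k] += 1
    return True

snap = read_snapshot(["emb_0", "emb_1"])
versions["emb_1"] += 1                                   # simulate a conflicting commit
print(try_commit(snap, {"emb_0": 0.1, "emb_1": -0.2}))   # False -> recompute the batch
```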
xHQeok6ZWX | Optimistic Asynchrony Control: Achieving Synchronous Convergence With Asynchronous Throughput for Embedding Model Training | [
"Roger Waleffe",
"Jason Mohoney"
] | Modern embedding-based machine learning (ML) models can contain hundreds of gigabytes of parameters, often exceeding the capacity of GPU hardware accelerators critical for training. One solution is to use a mixed CPU-GPU setup, where embedding parameters are stored in CPU memory and subsets are repeatedly transferred to the GPU for computation. In this setup two training paradigms exist: synchronous training and asynchronous training. In the former, batches are transferred one by one, leading to low throughput but fast model convergence. In contrast, during asynchronous training batches are transferred in parallel, allowing for more batches to be processed per unit time. Asynchronous training, however, can effect model quality due to concurrent batches which access the same model parameters leading to stale updates. In this work, we present Optimistic Asynchrony Control, a method for allowing asynchronous batch processing while ensuring model equivalence to a synchronous training execution. Our method is inspired by Optimistic Concurrency Control used in database systems. The main idea is to allow parallel processing and transfer of batches from the CPU to the GPU, but to validate each batch on the GPU before the model is updated to ensure that it has the correct values---the values it would have had if batches were processed and transferred one by one. We show that OAC achieves the best of both worlds, retaining the convergence of synchronous training while matching the throughput of asynchronous ML. This allows OAC to achieve the best time-to-accuracy of the three methods for mixed CPU-GPU embedding model training. | [
"embedding model training",
"asynchronous training",
"convergence",
"graph neural networks",
"GNNs"
] | https://openreview.net/pdf?id=xHQeok6ZWX | dgr6v2BUM5 | official_review | 1,718,230,694,435 | xHQeok6ZWX | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission40/Reviewer_o4md"
] | title: Review for paper "Optimistic Asynchrony Control"
summary: This paper proposes a mechanism for an asynchronous batch-training system between main and accelerator memory, based on Optimistic Asynchrony Control.
Specifically, the authors validate each batch before its updates are committed, to ensure that updates are applied atomically. Moreover, they provide a caching mechanism on the GPU to enhance throughput.
Evaluation on two graph embedding problems showcases superior time-to-accuracy results compared to the synchronous and asynchronous baselines.
strengths: * The system is clearly motivated and the parallel with DBMSs intuitively makes sense.
* The authors' implementation on top of a widely used system (Marius) is a positive aspect. However, open-sourcing their prototype would enhance reproducibility.
* The results of the evaluation showcase a measurable benefit of this technique.
weaknesses: * The paper is overly long in its background, with too much repetition until page 3.
* The overhead of book-keeping has not been measured, which can be sizeable
* The baselines to which the system is compared only cover the two ends of the spectrum.
confidence: 3
limitations: * Results of the evaluation do not include variance statistics, which may suggest that experiments were only run once.
* I am not sure whether this technique can generalise across multiple accelerators (and memory hierarchies). At the same time, I am unsure how this technique can be integrated with systems that support RDMA.
* The current system is mainly applied on graph problems, but the title or abstract do not make this specific.
* The authors propose a mechanism for validating the updates to parameters without quantifying the cost of running this. From the information provided, the overhead should be sizeable.
* The authors keep a timestamp per model parameter. This can be detrimental in terms of overall memory consumption. If we add optimiser state parameters, this can further deteriorate.
* The paper states that each batch is validated against every other concurrent batch, which would indicate a quadratic cost.
* It is unclear from the paper how much the conflict rates affect the performance of the system.
* It is further unclear what is the memory overhead of the cache.
suggestions: * I would suggest that certain terms are changed:
- embedding models -> representation learning
- CPU memory -> main memory
* I would urge the authors to also discuss the energy impact of sync vs. async training.
* The paper would benefit from an algorithmic representation of the system's workflow. |
xHQeok6ZWX | Optimistic Asynchrony Control: Achieving Synchronous Convergence With Asynchronous Throughput for Embedding Model Training | [
"Roger Waleffe",
"Jason Mohoney"
] | Modern embedding-based machine learning (ML) models can contain hundreds of gigabytes of parameters, often exceeding the capacity of GPU hardware accelerators critical for training. One solution is to use a mixed CPU-GPU setup, where embedding parameters are stored in CPU memory and subsets are repeatedly transferred to the GPU for computation. In this setup two training paradigms exist: synchronous training and asynchronous training. In the former, batches are transferred one by one, leading to low throughput but fast model convergence. In contrast, during asynchronous training batches are transferred in parallel, allowing for more batches to be processed per unit time. Asynchronous training, however, can effect model quality due to concurrent batches which access the same model parameters leading to stale updates. In this work, we present Optimistic Asynchrony Control, a method for allowing asynchronous batch processing while ensuring model equivalence to a synchronous training execution. Our method is inspired by Optimistic Concurrency Control used in database systems. The main idea is to allow parallel processing and transfer of batches from the CPU to the GPU, but to validate each batch on the GPU before the model is updated to ensure that it has the correct values---the values it would have had if batches were processed and transferred one by one. We show that OAC achieves the best of both worlds, retaining the convergence of synchronous training while matching the throughput of asynchronous ML. This allows OAC to achieve the best time-to-accuracy of the three methods for mixed CPU-GPU embedding model training. | [
"embedding model training",
"asynchronous training",
"convergence",
"graph neural networks",
"GNNs"
] | https://openreview.net/pdf?id=xHQeok6ZWX | F40k65TlaD | official_review | 1,718,361,567,396 | xHQeok6ZWX | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission40/Reviewer_UTxx"
] | title: Thoughtful algorithmic improvement; tricky implementation; limited experimentation.
summary: The paper introduces an asynchronous optimization algorithm (communication and batch processing are async) for representation learning of word models, graph models, or any other knowledge models. The main contribution is a carefully designed GPU-side batch cache that enforces an operation ordering as if all batches were processed synchronously. To guarantee the proper processing order, the authors introduce a vector clock that assigns a timestamp to each training sample (representation vector). The proposed method was verified empirically and demonstrated an advantage over synchronous algorithms and over asynchronous ones without ordering enforcement.
strengths: A thoroughly designed, instructive asynchronous algorithm, described in detail, that is deterministic with respect to its synchronous counterpart and has better performance and efficiency.
weaknesses: + The bibliography should be reviewed and brought up to date: capitalization of titles,
missing publication dates, journals, conferences, etc. (e.g.,
(Kipf & Welling, 2022) has been published at ICLR,
https://openreview.net/forum?id=SJU4ayYgl).
+ Missing table of contents in the hypertext markup.
+ Some pieces are hard to follow. Please rephrase sentences like the one below, or use
punctuation to logically split parts of the sentence.
> We must ensure that our processing of this batch is equivalent to the
> computation that would have occurred had we waited for the updates {a, c, f
> } → {a0 , c0 , f 0 } to make it back to the CPU.
+ Implementation details are completely unclear, since the cache eviction procedure
is quite complex. Specifically, it is unclear how exactly the authors
implemented the GPU cache. Do they use two buffers for the new and old
batches? Do they run a specialized CUDA kernel to merge caches?
Source code could resolve these questions if it were attached to the submission.
+ I believe that there is a better experimental setup than the link prediction task.
From my perspective, it would be more representative to compare with word2vec,
since it is the first algorithm that introduced the notion of async
optimization, even though the original implementation is CPU-only. Also,
word2vec has a more sophisticated evaluation protocol, namely evaluating the
similarity of word representations in addition to synthetic metrics.
+ The latter point raises another question related to locks and ordering.
Despite the fact that the word2vec algorithm does not use any locks to access model
weights, it converges to meaningful representations. Maybe we do not need
properly ordered async updates in practice? From this perspective, Figure 2
does not look convincing to me, so more experimentation in different
setups is required.
confidence: 4
suggestions: By design, the algorithm is limited to specific models for training vector representations. However, it seems that the algorithm could be adapted to more complex models, such as Transformers in a PEFT setup or MoE models. What do you think?
xHQeok6ZWX | Optimistic Asynchrony Control: Achieving Synchronous Convergence With Asynchronous Throughput for Embedding Model Training | [
"Roger Waleffe",
"Jason Mohoney"
] | Modern embedding-based machine learning (ML) models can contain hundreds of gigabytes of parameters, often exceeding the capacity of GPU hardware accelerators critical for training. One solution is to use a mixed CPU-GPU setup, where embedding parameters are stored in CPU memory and subsets are repeatedly transferred to the GPU for computation. In this setup two training paradigms exist: synchronous training and asynchronous training. In the former, batches are transferred one by one, leading to low throughput but fast model convergence. In contrast, during asynchronous training batches are transferred in parallel, allowing for more batches to be processed per unit time. Asynchronous training, however, can effect model quality due to concurrent batches which access the same model parameters leading to stale updates. In this work, we present Optimistic Asynchrony Control, a method for allowing asynchronous batch processing while ensuring model equivalence to a synchronous training execution. Our method is inspired by Optimistic Concurrency Control used in database systems. The main idea is to allow parallel processing and transfer of batches from the CPU to the GPU, but to validate each batch on the GPU before the model is updated to ensure that it has the correct values---the values it would have had if batches were processed and transferred one by one. We show that OAC achieves the best of both worlds, retaining the convergence of synchronous training while matching the throughput of asynchronous ML. This allows OAC to achieve the best time-to-accuracy of the three methods for mixed CPU-GPU embedding model training. | [
"embedding model training",
"asynchronous training",
"convergence",
"graph neural networks",
"GNNs"
] | https://openreview.net/pdf?id=xHQeok6ZWX | 2aoC3aqFN9 | decision | 1,718,722,239,751 | xHQeok6ZWX | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
rver7enVfY | An Analytical Approach to Enhancing DNN Efficiency and Accuracy Using Approximate Multiplication | [
"Salar Shakibhamedan",
"Anice Jahanjoo",
"Amin Aminifar",
"Nima Amirafshar",
"Nima TaheriNejad",
"Axel Jantsch"
] | Achieving higher accuracy in Deep Neural Networks (DNNs) often reaches a plateau despite extensive training, retraining, and fine-tuning. This paper introduces an analytical study using approximate multipliers to investigate potential accuracy improvements. Leveraging the principles of the Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities provided by approximate multipliers. Through Information Plane (IP) analysis, we gain a detailed understanding of DNN behavior under this approach. Our analysis indicates that this technique can break through existing accuracy barriers while offering computational and energy efficiency benefits. Compared to traditional methods that are computationally intensive, our approach uses less demanding optimization techniques. Additionally, approximate multipliers contribute to reduced energy consumption during both the training and inference phases. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization. | [
"Efficient DNN",
"Approximate Multiplier",
"Information Bottleneck",
"Efficient learning/Inference"
] | https://openreview.net/pdf?id=rver7enVfY | e8JXD1i7LT | official_review | 1,718,106,114,880 | rver7enVfY | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission53/Reviewer_kFCP"
] | title: Official Review
summary: The paper uses information theory and a genetic algorithm to optimize convolutional neural networks using approximate multiplication, with emphasis on a family called signed-carry-disregard multipliers. When optimizing LeNet (MNIST dataset), they achieve a small accuracy gain while also reducing the estimated energy consumption (assuming the hardware is implemented).
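To give readers unfamiliar with the area a concrete picture, a toy approximate multiplier is sketched below; it simply drops low-order partial products and is not the signed-carry-disregard design studied in the paper, whose exact circuit is not reproduced here.
```python
# Toy approximate 8-bit multiplier that drops the k least-significant partial
# products. Purely illustrative of the accuracy/energy trade-off; the paper's
# signed-carry-disregard multipliers use a different approximation scheme.
def approx_mul(a: int, b: int, k: int = 3) -> int:
    result = 0
    for bit in range(8):
        if bit < k:
            continue  # skip low-order partial products to save (hypothetical) hardware
        if (b >> bit) & 1:
            result += a << bit
    return result

print(approx_mul(93, 57), 93 * 57)  # approximate vs. exact product
```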
strengths: * Relevant to the workshop: has the potential to enable energy-efficient training (it is not clear to me whether the approximation was applied during the back-propagation phase, but it should at least make the forward pass more efficient).
* Very novel: it's been a while since I saw practical usage of information theory, genetic algorithms, or novel efficient multipliers, never mind a method that combines all three.
* Interesting results: Although the gain is small, the method improves both accuracy and efficiency, which is rare.
weaknesses: * Limited experiments: only applied to a very small network, with a limited effect on accuracy (0.16%), which is within the typical margin of error for neural network training.
* Consequently, I think that some of the strong claims made by the paper (e.g., "achieve accuracy levels previously considered unattainable") are premature, especially when they are presented as a result of using approximate multipliers. These statements lack theoretical justification, since approximate multipliers aren't expected to be better than exact multipliers.
confidence: 4 |
rver7enVfY | An Analytical Approach to Enhancing DNN Efficiency and Accuracy Using Approximate Multiplication | [
"Salar Shakibhamedan",
"Anice Jahanjoo",
"Amin Aminifar",
"Nima Amirafshar",
"Nima TaheriNejad",
"Axel Jantsch"
] | Achieving higher accuracy in Deep Neural Networks (DNNs) often reaches a plateau despite extensive training, retraining, and fine-tuning. This paper introduces an analytical study using approximate multipliers to investigate potential accuracy improvements. Leveraging the principles of the Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities provided by approximate multipliers. Through Information Plane (IP) analysis, we gain a detailed understanding of DNN behavior under this approach. Our analysis indicates that this technique can break through existing accuracy barriers while offering computational and energy efficiency benefits. Compared to traditional methods that are computationally intensive, our approach uses less demanding optimization techniques. Additionally, approximate multipliers contribute to reduced energy consumption during both the training and inference phases. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization. | [
"Efficient DNN",
"Approximate Multiplier",
"Information Bottleneck",
"Efficient learning/Inference"
] | https://openreview.net/pdf?id=rver7enVfY | CbMR6vND9I | meta_review | 1,718,695,490,054 | rver7enVfY | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission53/Area_Chair_zCgS"
] | metareview: The overall reviewer sentiment for this submission appears to be positive. The reviewers appreciated the work's novelty and relevance/scope. Experimental results appear to be reasonable, with a slight accuracy improvement on LeNet/MNIST. The main shortcoming of the work, as pointed out by multiple reviewers, is the limited evaluation: a wider study involving bigger networks and more datasets is required to fully understand how well the method generalizes.
Given the relevance and novelty of the work, I'm inclined to give this submission an accept (poster).
recommendation: Accept (Poster)
confidence: 4 |
rver7enVfY | An Analytical Approach to Enhancing DNN Efficiency and Accuracy Using Approximate Multiplication | [
"Salar Shakibhamedan",
"Anice Jahanjoo",
"Amin Aminifar",
"Nima Amirafshar",
"Nima TaheriNejad",
"Axel Jantsch"
] | Achieving higher accuracy in Deep Neural Networks (DNNs) often reaches a plateau despite extensive training, retraining, and fine-tuning. This paper introduces an analytical study using approximate multipliers to investigate potential accuracy improvements. Leveraging the principles of the Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities provided by approximate multipliers. Through Information Plane (IP) analysis, we gain a detailed understanding of DNN behavior under this approach. Our analysis indicates that this technique can break through existing accuracy barriers while offering computational and energy efficiency benefits. Compared to traditional methods that are computationally intensive, our approach uses less demanding optimization techniques. Additionally, approximate multipliers contribute to reduced energy consumption during both the training and inference phases. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization. | [
"Efficient DNN",
"Approximate Multiplier",
"Information Bottleneck",
"Efficient learning/Inference"
] | https://openreview.net/pdf?id=rver7enVfY | A7VEfYskuJ | official_review | 1,718,222,662,921 | rver7enVfY | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission53/Reviewer_E4tF"
] | title: Good analysis on how to enhance DNN efficiency with approximate multipliers
summary: This paper uses the information plane to track the effect of applying approximate multipliers which simplifies the hardware components to achieve higher efficiency.
strengths: Good motivation and a reasonable hypothesis. The authors use information bottleneck theory to analyze the effect of approximate multipliers, which is visually informative across the model structure. It is intuitive that using approximation in LeNet improves the generalization of the models, considering the success of dropout for such models.
weaknesses: As the authors themselves suggest, a wider study of different models and datasets could strengthen the conclusion of this paper. There is no theoretical analysis of how the approximate multipliers affect the information flow, beyond the empirical observation of the fitting and compression phases. Even though the authors' hypothesis matches the general intuition, the evidence from a single case is not enough to reach a strong conclusion. If it is difficult to quantify the influence of applying an approximate multiplier, a more general case study would make the conclusion more convincing.
How is energy saving defined? In Appendix A.4, some approximate multipliers achieve up to 150% energy saving, which is very confusing to me. It is also unclear whether the savings are measured on real machines or come from simulations.
confidence: 3 |
rver7enVfY | An Analytical Approach to Enhancing DNN Efficiency and Accuracy Using Approximate Multiplication | [
"Salar Shakibhamedan",
"Anice Jahanjoo",
"Amin Aminifar",
"Nima Amirafshar",
"Nima TaheriNejad",
"Axel Jantsch"
] | Achieving higher accuracy in Deep Neural Networks (DNNs) often reaches a plateau despite extensive training, retraining, and fine-tuning. This paper introduces an analytical study using approximate multipliers to investigate potential accuracy improvements. Leveraging the principles of the Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities provided by approximate multipliers. Through Information Plane (IP) analysis, we gain a detailed understanding of DNN behavior under this approach. Our analysis indicates that this technique can break through existing accuracy barriers while offering computational and energy efficiency benefits. Compared to traditional methods that are computationally intensive, our approach uses less demanding optimization techniques. Additionally, approximate multipliers contribute to reduced energy consumption during both the training and inference phases. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization. | [
"Efficient DNN",
"Approximate Multiplier",
"Information Bottleneck",
"Efficient learning/Inference"
] | https://openreview.net/pdf?id=rver7enVfY | 7bELALdPhH | decision | 1,718,721,542,255 | rver7enVfY | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
qqVAsSh3Gc | Memory and Bandwidth are All You Need for Fully Sharded Data Parallel | [
"Jiangtao Wang",
"Jan Ebert",
"Oleg Filatov",
"Stefan Kesselheim"
] | Transformer models have revolutionized a wide spectrum of disciplines, especially in language processing. The recent success has proven that model size scalability is crucial for achieving superior performance metrics. However, training large transformer models is challenging even on modern hardware with powerful GPUs and high-speed interconnects. Existing studies primarily focus on optimizing model training distribution strategies to minimize memory footprint and enhance training speed, often overlooking the scalability challenges related to model size and hardware constraints. To address this oversight, we thoroughly investigate computational, memory, and network demands of training large transformers using the Fully Sharded Data Parallel (FSDP) distributed strategy across different hardware clusters. We explore the intricate relationships between model size and hardware setups to identify configurations that ensure maximum model and hardware efficiency, effective sequence length management, and optimal training throughput.
A significant finding of our study is the critical interplay of the cluster's connection bandwidth and GPU memory size compared to the computational performance of GPUs. This interplay limits training efficiency, underscoring the role of both hardware characteristics as a possible bottleneck. By integrating theoretical analysis with simulations and empirical tests, we demonstrate how hardware limitations affect training efficacy, identifying key hardware thresholds and the impact of network connectivity. Our findings prompt a reassessment of training strategies guiding users on the way to finding hardware-optimal FSDP configurations, enhancing training efficiency for large-scale transformer models. | [
"Large Language Model",
"Distribution Training",
"Fully Sharded Data Parallel"
] | https://openreview.net/pdf?id=qqVAsSh3Gc | tujGWQMqmk | meta_review | 1,718,702,192,426 | qqVAsSh3Gc | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission18/Area_Chair_a3Gp"
] | metareview: **Strengths**
- The paper studies the performance impact of accelerator memory capacity and network bandwidth on FSDP, which is an important training technique for the community.
- The combination of analytical and empirical methods helps to efficiently explore a wider range of hardware scenarios.
- The evaluation section covers a decent range of factors such as hardware sizes, model sizes, batch sizes, and sequence lengths.
**Weaknesses**
- The paper does not provide new insight or recommendations beyond the already known fact that FSDP throughput is affected by memory and network bandwidth.
- While the evaluation contains a lot of experimental results, it is difficult to interpret because the setups are not explained in enough detail for reproduction and the results are not carefully discussed.
- Although the evaluation section includes simulation results, the fidelity of the simulation approach is not discussed, which makes it hard to judge its appropriateness for this scenario.
**Summary**
The common feedback is that the paper contains a lot of interesting material that is insufficiently explained or lacks a main takeaway for the reader. It might be better for the paper to focus on fewer results and more explanation.
recommendation: Reject
confidence: 4 |
qqVAsSh3Gc | Memory and Bandwidth are All You Need for Fully Sharded Data Parallel | [
"Jiangtao Wang",
"Jan Ebert",
"Oleg Filatov",
"Stefan Kesselheim"
] | Transformer models have revolutionized a wide spectrum of disciplines, especially in language processing. The recent success has proven that model size scalability is crucial for achieving superior performance metrics. However, training large transformer models is challenging even on modern hardware with powerful GPUs and high-speed interconnects. Existing studies primarily focus on optimizing model training distribution strategies to minimize memory footprint and enhance training speed, often overlooking the scalability challenges related to model size and hardware constraints. To address this oversight, we thoroughly investigate computational, memory, and network demands of training large transformers using the Fully Sharded Data Parallel (FSDP) distributed strategy across different hardware clusters. We explore the intricate relationships between model size and hardware setups to identify configurations that ensure maximum model and hardware efficiency, effective sequence length management, and optimal training throughput.
A significant finding of our study is the critical interplay of the cluster's connection bandwidth and GPU memory size compared to the computational performance of GPUs. This interplay limits training efficiency, underscoring the role of both hardware characteristics as a possible bottleneck. By integrating theoretical analysis with simulations and empirical tests, we demonstrate how hardware limitations affect training efficacy, identifying key hardware thresholds and the impact of network connectivity. Our findings prompt a reassessment of training strategies guiding users on the way to finding hardware-optimal FSDP configurations, enhancing training efficiency for large-scale transformer models. | [
"Large Language Model",
"Distribution Training",
"Fully Sharded Data Parallel"
] | https://openreview.net/pdf?id=qqVAsSh3Gc | tQ6mi641EZ | official_review | 1,718,328,241,531 | qqVAsSh3Gc | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission18/Reviewer_pKvH"
] | title: Review for Memory and Bandwidth are All Your Need for Fully Sharded Data Parallel
summary: The paper investigates the computational, memory, and network demands of using FSDP for LLM training. The authors explore the relationships between model size and hardware setups to identify configurations that lead to maximum model and hardware efficiency, effective sequence length management, and optimal training throughput. The authors mainly found that the interplay between the cluster's connection bandwidth and GPU memory size plays an important role in training efficiency. The analysis is carried out both theoretically and experimentally, and detailed experimental results are presented.
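As a rough sanity check of this memory/bandwidth interplay, here is a back-of-the-envelope estimator; the 16 bytes of training state per parameter (mixed-precision Adam) and the three collectives per step are standard assumptions of mine, not numbers taken from the paper, and activations and communication/computation overlap are ignored.
```python
# Back-of-the-envelope FSDP estimator (my assumptions, not the paper's model):
# 16 bytes of training state per parameter (fp16 weights/grads + fp32 Adam state),
# ZeRO-3-style sharding, and three collectives per step (all-gather in forward,
# all-gather in backward, reduce-scatter of gradients).
def fsdp_estimates(num_params, num_gpus, bandwidth_gbps):
    state_gib = 16 * num_params / num_gpus / 2**30
    comm_gib = 3 * 2 * num_params / 2**30          # ~2 bytes/param moved per collective
    comm_seconds = comm_gib * 2**30 / (bandwidth_gbps * 1e9 / 8)
    return state_gib, comm_gib, comm_seconds

m, c, t = fsdp_estimates(num_params=7e9, num_gpus=8, bandwidth_gbps=100)
print(f"~{m:.1f} GiB state/GPU, ~{c:.1f} GiB moved per GPU per step, >= {t:.1f} s of communication")
```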
strengths: The theoretical and experimental analysis is detailed. The experimental results are clear and support the conclusion.
weaknesses: The paper only investigates the FSDP scenario, while in practice tensor parallelism, sequence parallelism, and other acceleration techniques are also used. It would be better if the investigation could be extended.
confidence: 3 |
qqVAsSh3Gc | Memory and Bandwidth are All You Need for Fully Sharded Data Parallel | [
"Jiangtao Wang",
"Jan Ebert",
"Oleg Filatov",
"Stefan Kesselheim"
] | Transformer models have revolutionized a wide spectrum of disciplines, especially in language processing. The recent success has proven that model size scalability is crucial for achieving superior performance metrics. However, training large transformer models is challenging even on modern hardware with powerful GPUs and high-speed interconnects. Existing studies primarily focus on optimizing model training distribution strategies to minimize memory footprint and enhance training speed, often overlooking the scalability challenges related to model size and hardware constraints. To address this oversight, we thoroughly investigate computational, memory, and network demands of training large transformers using the Fully Sharded Data Parallel (FSDP) distributed strategy across different hardware clusters. We explore the intricate relationships between model size and hardware setups to identify configurations that ensure maximum model and hardware efficiency, effective sequence length management, and optimal training throughput.
A significant finding of our study is the critical interplay of the cluster's connection bandwidth and GPU memory size compared to the computational performance of GPUs. This interplay limits training efficiency, underscoring the role of both hardware characteristics as a possible bottleneck. By integrating theoretical analysis with simulations and empirical tests, we demonstrate how hardware limitations affect training efficacy, identifying key hardware thresholds and the impact of network connectivity. Our findings prompt a reassessment of training strategies guiding users on the way to finding hardware-optimal FSDP configurations, enhancing training efficiency for large-scale transformer models. | [
"Large Language Model",
"Distribution Training",
"Fully Sharded Data Parallel"
] | https://openreview.net/pdf?id=qqVAsSh3Gc | XIVacSJCFE | official_review | 1,718,062,124,551 | qqVAsSh3Gc | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission18/Reviewer_J3f6"
] | title: A Comprehensive Analysis of FSDP training efficiency
summary: This paper presents an in-depth training efficiency analysis of the Fully Sharded Data Parallel (FSDP) training strategy for large-scale transformer models, focusing on the impact of GPU memory and network bandwidth on training efficiency. It offers a theoretical analysis of maximum hardware thresholds and the role inter-node network bandwidth plays in this context. It supports its theoretical work with experiments on A100 clusters with varying inter-node network bandwidths for models up to 175B parameters and uses simulations to assess even larger model sizes.
strengths: - The paper provides a thorough investigation combining theoretical, simulated, and empirical methods to assess FSDP training efficiency across various hardware configurations.
- It evaluates a wide range of transformer model sizes, from 1.3B to 310B parameters
- The study highlights the critical role of network bandwidth, showing that better inter-node communication can significantly improve training efficiency for FSDP.
weaknesses: - More detailed descriptions of the experiment setup, such as the specific framework used for FSDP beyond mentioning PyTorch and CUDA versions, could improve the reproducibility of the results.
- The main contributions could be more explicitly listed in the introduction to improve clarity and focus for the readers.
- Sharing practical implications would be great, especially regarding when to use FSDP and when not to, based on the observations.
confidence: 3
limitations: - The lack of detailed experimental setup information might hinder the reproducibility of the experimental results (beyond the theoretical contributions)
suggestions: - Correct the title's grammatical error: “Your” should be replaced with “You.”
- Include practical insights. Based on the poor results of FSDP with low network bandwidth inter-node communication, should we rather use 3D parallelism with ZeRO-1 in these cases?
- Clearly list the main contributions in the introduction to improve the structure |
qqVAsSh3Gc | Memory and Bandwidth are All You Need for Fully Sharded Data Parallel | [
"Jiangtao Wang",
"Jan Ebert",
"Oleg Filatov",
"Stefan Kesselheim"
] | Transformer models have revolutionized a wide spectrum of disciplines, especially in language processing. The recent success has proven that model size scalability is crucial for achieving superior performance metrics. However, training large transformer models is challenging even on modern hardware with powerful GPUs and high-speed interconnects. Existing studies primarily focus on optimizing model training distribution strategies to minimize memory footprint and enhance training speed, often overlooking the scalability challenges related to model size and hardware constraints. To address this oversight, we thoroughly investigate computational, memory, and network demands of training large transformers using the Fully Sharded Data Parallel (FSDP) distributed strategy across different hardware clusters. We explore the intricate relationships between model size and hardware setups to identify configurations that ensure maximum model and hardware efficiency, effective sequence length management, and optimal training throughput.
A significant finding of our study is the critical interplay of the cluster's connection bandwidth and GPU memory size compared to the computational performance of GPUs. This interplay limits training efficiency, underscoring the role of both hardware characteristics as a possible bottleneck. By integrating theoretical analysis with simulations and empirical tests, we demonstrate how hardware limitations affect training efficacy, identifying key hardware thresholds and the impact of network connectivity. Our findings prompt a reassessment of training strategies guiding users on the way to finding hardware-optimal FSDP configurations, enhancing training efficiency for large-scale transformer models. | [
"Large Language Model",
"Distribution Training",
"Fully Sharded Data Parallel"
] | https://openreview.net/pdf?id=qqVAsSh3Gc | USevinVKLp | official_review | 1,718,202,011,813 | qqVAsSh3Gc | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission18/Reviewer_1Tey"
] | title: This paper is relevant for the workshop and the research community. It is based on extensive simulations and experiments but brings few new insights.
summary: The paper lies at the intersection of transformer model training with the Fully Sharded Data Parallel (FSDP) distributed strategy and the hardware configuration. It provides insights on various training performance metrics and how they can be improved with respect to model size, interconnect speed and bandwidth, and GPU memory size. The authors show that these parameters can have a significant impact on performance and that correctly dimensioning the infrastructure matters.
The study is based on simulations and experiments. The simulations cover larger model sizes and GPU counts, while the experiments validate the simulations on smaller setups.
strengths: Large transformer-based language models are widespread today and are applied to a wide range of applications. Their training consumes a lot of resources (energy/carbon and money), so improving their efficiency can have a significant impact whenever training is necessary. Applications require different parameter sizes, so having pointers on how to dimension the infrastructure can be of interest to the community.
The article is based on both theoretical analysis and experiments. Simulation allows the authors to investigate models with up to 310 billion parameters, and experiments are done with various levels of interconnect speed and various numbers of GPUs.
Both the simulations and the experiments provide interesting insights into how the model size, the sequence length, and the batch size, along with hardware configuration such as the interconnect bandwidth and the number of GPUs, impact model utilization and throughput.
The followed methodology and the settings are clear.
weaknesses: The paper is very dense, with many figures and equations that, in my view, are not sufficiently explained.
The authors highlight the constraints due to interconnect bandwidth and GPU memory, but fail to provide recommendations beyond increasing the bandwidth, which is usually a hard hardware constraint.
confidence: 4
limitations: Clarity of the paper (too dense, some parts would require more explanations)
suggestions: There are several writing mistakes in the article:
- In the title "are all you need" not "your"
- The word Appendix is missing in section 3.1
- the acronym "ctx" is not defined (Figures 2 and 3)
- What does "troppo" mean? Section 3.2.2
- The acronym FSDP should be defined in the introduction too.
Additionally, Figure 1 could be moved closer to where it is referenced in the text. The appendix is huge and not self-explanatory. You should reference it more in the article.
Please provide more explanation of the equations; the understanding of the paper would be greatly improved. For example, why certain multiplication factors (12, 16) were chosen is not explicitly explained. You could add a diagram of the Transformer architecture with the corresponding letters from the equations. Equations 13 and 14 come out of nowhere; the appendix should be referenced if the proofs are there.
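For what it is worth, my guess (not confirmed by the paper) is that such constants come from standard transformer bookkeeping, e.g.:
```latex
% My guess at the origin of the constants (an assumption, not taken from the paper):
% per-layer parameter count of a standard transformer block, and bytes of training
% state per parameter under mixed-precision Adam.
P_{\text{layer}} \approx \underbrace{4h^2}_{\text{attention}} + \underbrace{8h^2}_{\text{MLP}} = 12h^2,
\qquad
\text{bytes per parameter} \approx \underbrace{2 + 2}_{\text{fp16 weights + grads}} + \underbrace{4 + 4 + 4}_{\text{fp32 master, } m,\, v} = 16 .
```
If the paper's factors have a different origin, stating it explicitly would resolve the confusion.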
qqVAsSh3Gc | Memory and Bandwidth are All You Need for Fully Sharded Data Parallel | [
"Jiangtao Wang",
"Jan Ebert",
"Oleg Filatov",
"Stefan Kesselheim"
] | Transformer models have revolutionized a wide spectrum of disciplines, especially in language processing. The recent success has proven that model size scalability is crucial for achieving superior performance metrics. However, training large transformer models is challenging even on modern hardware with powerful GPUs and high-speed interconnects. Existing studies primarily focus on optimizing model training distribution strategies to minimize memory footprint and enhance training speed, often overlooking the scalability challenges related to model size and hardware constraints. To address this oversight, we thoroughly investigate computational, memory, and network demands of training large transformers using the Fully Sharded Data Parallel (FSDP) distributed strategy across different hardware clusters. We explore the intricate relationships between model size and hardware setups to identify configurations that ensure maximum model and hardware efficiency, effective sequence length management, and optimal training throughput.
A significant finding of our study is the critical interplay of the cluster's connection bandwidth and GPU memory size compared to the computational performance of GPUs. This interplay limits training efficiency, underscoring the role of both hardware characteristics as a possible bottleneck. By integrating theoretical analysis with simulations and empirical tests, we demonstrate how hardware limitations affect training efficacy, identifying key hardware thresholds and the impact of network connectivity. Our findings prompt a reassessment of training strategies guiding users on the way to finding hardware-optimal FSDP configurations, enhancing training efficiency for large-scale transformer models. | [
"Large Language Model",
"Distribution Training",
"Fully Sharded Data Parallel"
] | https://openreview.net/pdf?id=qqVAsSh3Gc | 4ilSmfuouU | decision | 1,718,724,055,827 | qqVAsSh3Gc | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: After a thorough evaluation of the paper and the feedback provided by reviewers and a meta-reviewer, we have made the decision to accept the paper. Our decision is motivated by the paper's relevance to one of the core topics of the workshop. The program chairs believe that the paper presents valuable results that are pertinent to the workshop's objectives, and we are eager to foster discussions and research advancements in this area. However, we must stress the imperative need for substantial improvements in the paper's presentation. Please take into account all reviewers' suggested improvements. Congratulations and hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
qD2eFNvtw4 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | [
"Ashwinee Panda",
"Berivan Isik",
"Xiangyu Qi",
"Sanmi Koyejo",
"Tsachy Weissman",
"Prateek Mittal"
] | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights--causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks -- thus, avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks. | [
"lottery ticket",
"catastrophic forgetting",
"safety",
"model merging",
"sparsity",
"large language models"
] | https://openreview.net/pdf?id=qD2eFNvtw4 | hpclwjeWOK | official_review | 1,718,329,151,968 | qD2eFNvtw4 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission23/Reviewer_zP6K"
] | title: LoTA offers a novel, efficient solution to mitigate destructive interference in LLMs, but needs clearer methodology and better reproducibility.
summary: The authors introduce Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes a sparse subnetwork within a large language model (LLM). This method aims to mitigate destructive interference between tasks during multi-task adaptation. LoTA outperforms full fine-tuning (FFT) and low-rank adaptation (LoRA) in various tasks such as instruction following, reasoning, math, and summarization, preventing catastrophic forgetting and enabling efficient model merging. LoTA uses sparse task vectors, making it memory-efficient compared to other methods. However, the computational complexity introduced during mask calibration needs careful evaluation.
Sparse Adaptation: LoTA involves three phases:
1. Mask Calibration - initial full fine-tuning to identify significant weight updates.
2. Mask Extraction - derivation of a sparsity mask from the updates, isolating the sparse subnetwork relevant to the task.
3. Sparse Fine-Tuning - re-application of the sparsity mask and fine-tuning the identified subnetwork while freezing other parameters.
Finally, with sequential training and model merging, LoTA addresses challenges in sequential training by preventing catastrophic forgetting and improves model merging by using mutually sparse task vectors.
In terms of results, the authors show that on instruction following, their method achieved a win rate of 19.0%, matching FFT and surpassing LoRA (15.3%). Also, they show mitigation of catastrophic forgetting, where their method maintained a higher win rate (17.7%) compared to FFT (15.2%) when adapting from instruction following to GSM8k. Finally, for model merging they achieved better task-average performance compared to existing methods, with a performance of 38.5% in a merged model scenario.
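To make the three phases concrete, a minimal sketch in PyTorch style is given below; the top-c magnitude criterion, the rewind to pre-trained weights, and the helper functions are simplifications on my part rather than the authors' exact recipe.
```python
import torch

# Schematic LoTA pipeline (my simplification, not the authors' code). The mask
# keeps the top-c fraction of weights ranked by the magnitude of the calibration
# update; `full_finetune` and `sparse_finetune` are placeholder training loops.
def lota(model, task_data, full_finetune, sparse_finetune, c=0.001):
    w_pre = {n: p.detach().clone() for n, p in model.named_parameters()}
    full_finetune(model, task_data)                        # phase 1: mask calibration
    masks = {}
    for n, p in model.named_parameters():                  # phase 2: mask extraction
        delta = (p.detach() - w_pre[n]).abs()
        k = max(1, int(c * delta.numel()))
        thresh = torch.topk(delta.flatten(), k).values.min()
        masks[n] = (delta >= thresh).float()
        p.data.copy_(w_pre[n])                             # rewind to pre-trained weights
    sparse_finetune(model, task_data, masks)               # phase 3: train only masked entries
    return masks
```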
strengths: * LoTA's sparse adaptation method is highly innovative and addresses critical challenges in multi-task learning and model merging.
* The paper includes extensive experiments across multiple tasks, validating the effectiveness of LoTA. The results consistently demonstrate LoTA's superior performance.
* The method's efficiency and ability to prevent catastrophic forgetting make it highly practical for real-world applications, especially in dynamic environments.
weaknesses: * The supplementary materials are lacking in detail, particularly regarding hyperparameter settings and code availability. This hinders reproducibility and understanding.
* The LoTTO component introduces additional computational complexity, which could be a drawback for some applications. The paper should discuss potential optimizations to reduce this overhead.
* While the paper is sound, a deeper exploration of the theoretical underpinnings and potential mathematical optimizations of the sparsity mask calibration process would be beneficial. For instance, a more rigorous analysis of the sparsity patterns and their impact on model performance could provide valuable insights.
confidence: 5
limitations: * The provided information is not sufficient for full replication of the experiments. The absence of exact code and dataset details in the supplementary materials limits the ability to reproduce the results.
* The mask calibration phase introduces significant computational overhead, which may limit the practicality of LoTA in resource-constrained environments. The additional steps required for sparse fine-tuning and iterative mask calibration can be computationally intensive.
* The paper lacks detailed explanations, particularly in the mask extraction and application processes. A more thorough breakdown of these steps, including mathematical formulations and justifications, would improve comprehension.
suggestions: * Future research should explore optimizing the sparsity mask calibration process to reduce computational overhead and enhance performance. Investigating the application of LoTA in other domains beyond language models could further demonstrate its versatility.
* Broadening the application scope to include other types of neural networks and tasks could extend LoTA's impact across different areas of machine learning. |
qD2eFNvtw4 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | [
"Ashwinee Panda",
"Berivan Isik",
"Xiangyu Qi",
"Sanmi Koyejo",
"Tsachy Weissman",
"Prateek Mittal"
] | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights--causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks -- thus, avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks. | [
"lottery ticket",
"catastrophic forgetting",
"safety",
"model merging",
"sparsity",
"large language models"
] | https://openreview.net/pdf?id=qD2eFNvtw4 | eDWbvZiOoB | decision | 1,718,651,521,252 | qD2eFNvtw4 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Oral)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
qD2eFNvtw4 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | [
"Ashwinee Panda",
"Berivan Isik",
"Xiangyu Qi",
"Sanmi Koyejo",
"Tsachy Weissman",
"Prateek Mittal"
] | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights--causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks -- thus, avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks. | [
"lottery ticket",
"catastrophic forgetting",
"safety",
"model merging",
"sparsity",
"large language models"
] | https://openreview.net/pdf?id=qD2eFNvtw4 | Vb7CofzRqB | meta_review | 1,718,629,289,899 | qD2eFNvtw4 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission23/Area_Chair_Bx7p"
metareview: The paper proposes a new sparse adaptation method that searches for a sparse subnetwork within a large model. The approach outperforms both LoRA adapters and full fine-tuning on the selected datasets and tasks. Both reviewers acknowledge the paper's novelty and are overall positive about the work. The AC agrees and recommends acceptance.
recommendation: Accept (Oral)
confidence: 5 |
qD2eFNvtw4 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | [
"Ashwinee Panda",
"Berivan Isik",
"Xiangyu Qi",
"Sanmi Koyejo",
"Tsachy Weissman",
"Prateek Mittal"
] | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights--causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks -- thus, avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks. | [
"lottery ticket",
"catastrophic forgetting",
"safety",
"model merging",
"sparsity",
"large language models"
] | https://openreview.net/pdf?id=qD2eFNvtw4 | 8dJ6dW9kkT | official_review | 1,718,294,913,324 | qD2eFNvtw4 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission23/Reviewer_FYuF"
] | title: Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs
summary: The paper introduces a novel method called Lottery Ticket Adaptation (LoTA) to address catastrophic forgetting and destructive interference when a model is fine-tuned on a new task. LoTA consists of three phases: mask calibration, mask extraction, and sparse adaptation. In the mask calibration step, the model is fine-tuned on the new task, converting the original parameters Wp to new parameters Wf. In the mask extraction phase, a sparse mask M is derived from the difference between Wf and Wp. In the sparse adaptation phase, the extracted mask is used to define a subnetwork Wp * M, and only this subnetwork is fine-tuned on the new task. LoTA performs better than full fine-tuning and LoRA in mitigating destructive interference in different multi-task adaptation paradigms and also in preventing catastrophic forgetting of earlier tasks.
strengths: - LoTA is a novel contribution that is effective across three major multi-task adaptation methods: storing and loading adapters, sequential training, and model merging.
- The paper is well-written.
- The paper provides thorough explanations and results for each adaptation approach.
- Most of the claims are well-supported by extensive experiments, evaluating two models on six different tasks across three multi-task adaptation paradigms.
weaknesses: - LoTA requires extra time and resources to obtain the sparse matrix, which can limit its usability. While the paper acknowledges this issue, provides some solutions like using masks transferred from other datasets, and mentions that it will not add more than a few hours of compute time on a single GPU, it remains an open problem as the number of tasks increases.
- Results for single-task experiments (shown in Table 1) are supported by both models, Llama-3 and Mistral. It would be important to see if Llama-3 shows similar patterns in the sequential training and model merging experiments too. This would enhance the robustness of the findings across models.
- The paper would benefit from addressing the effectiveness of LoTA in settings with more than two tasks. For example, how well does LoTA mitigate catastrophic forgetting for Tasks A and B when fine-tuned on Task C?
- In Table 2, it would make the results more complete if combinations like LoRA-LoRA and LoRA-FFT were added to the GSM8K results.
- Including related work on lottery ticket hypotheses in the main paper would provide additional context for readers.
confidence: 3
suggestions: - Page 1 right column, remove ‘[leftmargin=15pt]’.
- Page 3, line 113, right column. `Lottery Ticket Adaptation (LoTA). LoTA works in two phases as summarized in Figure 2: (1) mask calibration, (2) mask extraction, (3) sparse adaptation.`. → replace `two phases` with `three phases`. |
pW4MmsnVRq | Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity | [
"Wentao Guo",
"Jikai Long",
"Yimeng Zeng",
"Zirui Liu",
"Xinyu Yang",
"Yide Ran",
"Jacob R. Gardner",
"Osbert Bastani",
"Christopher De Sa",
"Xiaodong Yu",
"Beidi Chen",
"Zhaozhuo Xu"
] | Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging since full precision forward passes are infeasible. In this study, we address this limitation by integrating sparsity and quantization into ZO fine-tuning of LLMs. Specifically, we investigate the feasibility of fine-tuning an extremely small subset of LLM parameters using ZO. This approach allows the majority of un-tuned parameters to be quantized to accommodate the constraints of limited device memory. Our findings reveal that the pre-training process can identify a set of "sensitive parameters" that can guide the ZO fine-tuning of LLMs on downstream tasks. Our results demonstrate that fine-tuning 0.1% sensitive parameters in the LLM with ZO can outperform the full ZO fine-tuning performance, while offering wall-clock time speedup. Additionally, we show that ZO fine-tuning targeting these 0.1% sensitive parameters, combined with 4 bit quantization, enables efficient ZO fine-tuning of an Llama2-7B model on a GPU device with less than 8GiB of memory and notably reduced latency. | [
"Zeroth-order optimization",
"LLM fine-tuning",
"Quantization-aware training"
] | https://openreview.net/pdf?id=pW4MmsnVRq | pNoch77fVT | official_review | 1,718,330,245,565 | pW4MmsnVRq | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission5/Reviewer_Q9VR"
title: Presents a novel and efficient method for fine-tuning large language models using zeroth-order optimization with extreme sparsity and quantization
summary: The paper proposes a novel method for fine-tuning large language models (LLMs) using zeroth-order optimization (ZO), which only requires forward passes and is thus memory-efficient. The method integrates sparsity and quantization to enable fine-tuning on memory-constrained devices. The approach identifies and fine-tunes only 0.1% of the most sensitive parameters, achieving performance comparable to full fine-tuning while reducing memory usage and computational requirements. The authors show how to identify sensitive parameters that guide efficient ZO fine-tuning, and they demonstrate the feasibility of fine-tuning LLMs on devices with limited memory by quantizing non-sensitive parameters. Finally, extensive experiments show the method's competitive performance across various LLMs and tasks.
strengths: * The paper presents a novel integration of ZO optimization with extreme sparsity and quantization, which has not been extensively explored in the context of LLM fine-tuning.
* The approach is innovative in addressing memory constraints on edge devices, a growing area of interest as LLMs become more ubiquitous.
* The methods and algorithms proposed are well-justified and grounded in theoretical principles.
* Theoretical claims are supported by both theoretical analysis and empirical evidence, though some assumptions (e.g., sensitive parameter selection) might benefit from further empirical validation.
* The extensive experimental validation demonstrates the method's effectiveness across various models and tasks.
* The paper is well-written and generally easy to follow, with clear motivations and methodology.
* Figures and tables are well-presented and support the text effectively.
* The potential impact on the field is significant, particularly for deploying LLMs on edge devices with constrained resources.
* Contributions are important for advancing the state-of-the-art in efficient LLM fine-tuning and on-device personalization.
* Practical implications are well-discussed, though real-world applications and deployments would further illustrate the method's effectiveness.
* The experiments appear reproducible based on the provided information, but access to the datasets and code would enhance transparency.
weaknesses: * The paper could provide more detailed supplementary materials to facilitate replication efforts by other researchers.
* The experiments are comprehensive, though more diverse datasets and LLM architectures could strengthen the findings.
* Some sections could benefit from additional explanations, particularly around the theoretical foundations of ZO optimization.
* Practical implications are well-discussed, though real-world applications and deployments would further illustrate the method's effectiveness.
* Some theoretical aspects, particularly around sensitive parameter selection, could benefit from further empirical validation.
confidence: 5
limitations: * The proposed method has strong potential for practical applications, particularly in the deployment of LLMs on mobile and edge devices.
* Conduct a detailed analysis of the method's scalability, particularly in terms of computational and memory requirements. Discuss how the approach scales with different model sizes and how it can be adapted for even larger models or more constrained environments.
suggestions: * Future work could explore further optimizations and extensions of the method, such as adaptive sparsity levels or dynamic parameter selection during fine-tuning.
* Conduct ablation studies to better understand the impact of each component of the proposed method. For instance, isolate the effects of sparsity, quantization, and ZO optimization to assess their individual contributions to the overall performance.
* Include more comprehensive comparisons with state-of-the-art fine-tuning methods, including both memory-efficient techniques and traditional methods. This would provide a clearer benchmark for evaluating the proposed method's relative performance. |
pW4MmsnVRq | Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity | [
"Wentao Guo",
"Jikai Long",
"Yimeng Zeng",
"Zirui Liu",
"Xinyu Yang",
"Yide Ran",
"Jacob R. Gardner",
"Osbert Bastani",
"Christopher De Sa",
"Xiaodong Yu",
"Beidi Chen",
"Zhaozhuo Xu"
] | Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging since full precision forward passes are infeasible. In this study, we address this limitation by integrating sparsity and quantization into ZO fine-tuning of LLMs. Specifically, we investigate the feasibility of fine-tuning an extremely small subset of LLM parameters using ZO. This approach allows the majority of un-tuned parameters to be quantized to accommodate the constraints of limited device memory. Our findings reveal that the pre-training process can identify a set of "sensitive parameters" that can guide the ZO fine-tuning of LLMs on downstream tasks. Our results demonstrate that fine-tuning 0.1% sensitive parameters in the LLM with ZO can outperform the full ZO fine-tuning performance, while offering wall-clock time speedup. Additionally, we show that ZO fine-tuning targeting these 0.1% sensitive parameters, combined with 4 bit quantization, enables efficient ZO fine-tuning of an Llama2-7B model on a GPU device with less than 8GiB of memory and notably reduced latency. | [
"Zeroth-order optimization",
"LLM fine-tuning",
"Quantization-aware training"
] | https://openreview.net/pdf?id=pW4MmsnVRq | QBP8YFx6T9 | official_review | 1,718,362,550,123 | pW4MmsnVRq | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission5/Reviewer_HBxv"
] | title: Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity
summary: The paper combines quantization and extreme sparsity with zeroth-order optimization to reduce memory consumption during fine-tuning of large language models (LLMs). This method fine-tunes a very small subset of LLM parameters using only the forward pass, allowing the majority of untuned LLM parameters to remain in a quantized state. By integrating these techniques, the proposed method enables fine-tuning models on devices with limited memory. The study uses a sparse mask of sensitive parameters to identify an extremely small subset of the model's parameters for fine-tuning. The study finds that fine-tuning just 0.1% of the LLM's sensitive parameters using ZO optimization outperforms full ZO fine-tuning while saving on wall-clock time.
strengths: - The study's motivation is clearly defined, emphasizing the need for memory-efficient fine-tuning of large language models (LLMs) on edge devices.
- The paper introduces a novel approach by combining zeroth-order (ZO) optimization with extreme sparsity and quantization. This unique combination reduces memory usage and computational costs, making it possible to fine-tune LLMs on devices with limited resources.
- The paper provides ample empirical evidence on how different sparse masks, as well as the percentage of LLM parameters to optimize, affect the ZO fine-tuning results.
- The results in the paper are supported by extensive experiments, making it a solid work and impactful contribution to the research community.
- The method was validated on an edge device. By enabling LLM fine-tuning on edge devices, the proposed method has significant practical implications. It opens up new possibilities for deploying powerful LLMs in resource-constrained environments, expanding their usability and accessibility.
weaknesses: - The method requires identifying the sensitive parameters for each task. It is important to understand how much memory and time are required for this step.
- Figures 5 and 6: Adding results, or a discussion of the results, for the other tasks (e.g., SST-2, CB, BoolQ, and WSC) and models (e.g., Mistral-7B and OPT-6.7B) would provide a more comprehensive picture.
confidence: 3
suggestions: - Some lines need proper spacing to allow easy reading, e.g., line 144 (right column) and line 179.
pW4MmsnVRq | Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity | [
"Wentao Guo",
"Jikai Long",
"Yimeng Zeng",
"Zirui Liu",
"Xinyu Yang",
"Yide Ran",
"Jacob R. Gardner",
"Osbert Bastani",
"Christopher De Sa",
"Xiaodong Yu",
"Beidi Chen",
"Zhaozhuo Xu"
] | Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging since full precision forward passes are infeasible. In this study, we address this limitation by integrating sparsity and quantization into ZO fine-tuning of LLMs. Specifically, we investigate the feasibility of fine-tuning an extremely small subset of LLM parameters using ZO. This approach allows the majority of un-tuned parameters to be quantized to accommodate the constraints of limited device memory. Our findings reveal that the pre-training process can identify a set of "sensitive parameters" that can guide the ZO fine-tuning of LLMs on downstream tasks. Our results demonstrate that fine-tuning 0.1% sensitive parameters in the LLM with ZO can outperform the full ZO fine-tuning performance, while offering wall-clock time speedup. Additionally, we show that ZO fine-tuning targeting these 0.1% sensitive parameters, combined with 4 bit quantization, enables efficient ZO fine-tuning of an Llama2-7B model on a GPU device with less than 8GiB of memory and notably reduced latency. | [
"Zeroth-order optimization",
"LLM fine-tuning",
"Quantization-aware training"
] | https://openreview.net/pdf?id=pW4MmsnVRq | PhiNwibvOa | meta_review | 1,718,408,156,458 | pW4MmsnVRq | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission5/Area_Chair_VDkC"
] | metareview: The reviewers are positive about the paper and its merits. I agree with their assessment and recommend acceptance.
I would also suggest and encourage the following:
1. It would be very helpful to explain the GPU memory demand of ZO fine-tuning in detail (as the reviewers suggested), in order to improve memory management in future work.
2. For 4-bit quantization on edge devices, since some of them support INT8, it would be worth exploring how much the proposed methods could benefit from INT8 in future work.
The authors should clarify the meaning of "with Float16 datatype" and "16 bit", since (for throughput) official Flash Attention 2 does not support fp16, and (for fine-tuning) not all (edge) devices support bf16.
recommendation: Accept (Poster)
confidence: 4 |
pW4MmsnVRq | Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity | [
"Wentao Guo",
"Jikai Long",
"Yimeng Zeng",
"Zirui Liu",
"Xinyu Yang",
"Yide Ran",
"Jacob R. Gardner",
"Osbert Bastani",
"Christopher De Sa",
"Xiaodong Yu",
"Beidi Chen",
"Zhaozhuo Xu"
] | Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging since full precision forward passes are infeasible. In this study, we address this limitation by integrating sparsity and quantization into ZO fine-tuning of LLMs. Specifically, we investigate the feasibility of fine-tuning an extremely small subset of LLM parameters using ZO. This approach allows the majority of un-tuned parameters to be quantized to accommodate the constraints of limited device memory. Our findings reveal that the pre-training process can identify a set of "sensitive parameters" that can guide the ZO fine-tuning of LLMs on downstream tasks. Our results demonstrate that fine-tuning 0.1% sensitive parameters in the LLM with ZO can outperform the full ZO fine-tuning performance, while offering wall-clock time speedup. Additionally, we show that ZO fine-tuning targeting these 0.1% sensitive parameters, combined with 4 bit quantization, enables efficient ZO fine-tuning of an Llama2-7B model on a GPU device with less than 8GiB of memory and notably reduced latency. | [
"Zeroth-order optimization",
"LLM fine-tuning",
"Quantization-aware training"
] | https://openreview.net/pdf?id=pW4MmsnVRq | IQheygQNEC | decision | 1,718,651,024,148 | pW4MmsnVRq | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
pICSfWkJIk | DiLoCo: Distributed Low-Communication Training of Language Models | [
"Arthur Douillard",
"Qixuan Feng",
"Andrei Alex Rusu",
"Rachita Chhaparia",
"Yani Donchev",
"Adhiguna Kuncoro",
"MarcAurelio Ranzato",
"Arthur Szlam",
"Jiajun Shen"
] | Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected accelerators, with devices exchanging gradients and other intermediate states at each optimization step. While it is difficult to build and maintain a single computing cluster hosting many accelerators, it might be easier to find several computing clusters each hosting a smaller number of devices. In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected. The approach is a variant of federated averaging, where the number of inner steps is large, the inner optimizer is AdamW, and the outer optimizer is Nesterov momentum. On the widely used C4 dataset, we show that DiLoCo on 8 workers performs as well as fully synchronous optimization while communicating 500 times less. DiLoCo exhibits great robustness to the data distribution of each worker. It is also robust to resources becoming unavailable over time, and vice versa, it can seamlessly leverage resources that become available during training. | [
"large-scale",
"language modeling",
"distributed learning",
"federated learning"
] | https://openreview.net/pdf?id=pICSfWkJIk | xx4sowgNBx | official_review | 1,718,222,721,014 | pICSfWkJIk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission3/Reviewer_zkbr"
] | title: DiLoCo: Distributed Low-Communication Training of Language Models
summary: This paper focuses on designing a new federated learning mechanism, DiLoCo, to reduce the communication cost. The idea is that workers locally update their models using their own data; then, after H local steps, the local models are aggregated. This process continues until convergence.
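For illustration, here is a minimal sketch of the outer synchronization step as described in the abstract (inner AdamW steps are abstracted away; the outer optimizer applies Nesterov momentum to the averaged parameter delta). The hyperparameters and the particular Nesterov parametrization are assumptions, not values taken from the paper.

```python
import numpy as np

def diloco_outer_step(global_params, worker_params, velocity,
                      outer_lr=0.7, momentum=0.9):
    """Outer synchronization: average the workers' parameter deltas ("outer gradient")
    and apply one Nesterov-momentum step to the global parameters."""
    outer_grad = np.mean([global_params - w for w in worker_params], axis=0)
    velocity = momentum * velocity + outer_grad
    new_global = global_params - outer_lr * (outer_grad + momentum * velocity)
    return new_global, velocity

# Illustrative usage with 8 "workers" holding slightly diverged copies of a 10-dim model.
rng = np.random.default_rng(0)
global_params = np.zeros(10)
workers = [global_params + 0.01 * rng.standard_normal(10) for _ in range(8)]
velocity = np.zeros_like(global_params)
global_params, velocity = diloco_outer_step(global_params, workers, velocity)
```

Each worker would then reset its replica to the new global parameters and run another H inner AdamW steps on its own data shard.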
strengths: + The research direction, which focuses on reducing communication costs, is interesting.
weaknesses: - Compared to the existing work, the paper's contribution is unclear. Similar approaches in the context of local SGD [1] and in the federated learning setup [2] have been widely studied in the literature.
[1] Stich, Sebastian U. "Local SGD converges fast and communicates little." arXiv preprint arXiv:1805.09767 (2018).
[2] Gao, Hongchang, An Xu, and Heng Huang. "On the convergence of communication-efficient local SGD for federated learning." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 9, pp. 7510-7518. 2021.
among others.
- The contribution of using Nesterov as the outer optimizer, as suggested by the authors, is incremental.
- The experimental results are limited as only one model and dataset have been considered.
confidence: 4 |
pICSfWkJIk | DiLoCo: Distributed Low-Communication Training of Language Models | [
"Arthur Douillard",
"Qixuan Feng",
"Andrei Alex Rusu",
"Rachita Chhaparia",
"Yani Donchev",
"Adhiguna Kuncoro",
"MarcAurelio Ranzato",
"Arthur Szlam",
"Jiajun Shen"
] | Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected accelerators, with devices exchanging gradients and other intermediate states at each optimization step. While it is difficult to build and maintain a single computing cluster hosting many accelerators, it might be easier to find several computing clusters each hosting a smaller number of devices. In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected. The approach is a variant of federated averaging, where the number of inner steps is large, the inner optimizer is AdamW, and the outer optimizer is Nesterov momentum. On the widely used C4 dataset, we show that DiLoCo on 8 workers performs as well as fully synchronous optimization while communicating 500 times less. DiLoCo exhibits great robustness to the data distribution of each worker. It is also robust to resources becoming unavailable over time, and vice versa, it can seamlessly leverage resources that become available during training. | [
"large-scale",
"language modeling",
"distributed learning",
"federated learning"
] | https://openreview.net/pdf?id=pICSfWkJIk | pnYQcIwebH | meta_review | 1,718,651,250,563 | pICSfWkJIk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission3/Area_Chair_Mbnf"
metareview: The work proposes a method for distributed optimization that utilizes a version of Federated Averaging with Nesterov momentum as the outer optimizer. Reviewers have appreciated the potential of the research idea, as well as the depth of the empirical analysis of the proposed method. However, they also expressed strong concerns related to the novelty of the approach and to the lack of clarity around systems-level evaluation data, such as the network bandwidth requirements. Furthermore, two reviewers noted that the method is not explored in the context of heterogeneous setups, which makes the contribution of the paper less clear given the existence of gradient accumulation.
Therefore, I cannot recommend this paper for acceptance, but I encourage the authors to revise the manuscript by following the reviewers' suggestions and submit it to another venue.
recommendation: Reject
confidence: 4 |
pICSfWkJIk | DiLoCo: Distributed Low-Communication Training of Language Models | [
"Arthur Douillard",
"Qixuan Feng",
"Andrei Alex Rusu",
"Rachita Chhaparia",
"Yani Donchev",
"Adhiguna Kuncoro",
"MarcAurelio Ranzato",
"Arthur Szlam",
"Jiajun Shen"
] | Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected accelerators, with devices exchanging gradients and other intermediate states at each optimization step. While it is difficult to build and maintain a single computing cluster hosting many accelerators, it might be easier to find several computing clusters each hosting a smaller number of devices. In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected. The approach is a variant of federated averaging, where the number of inner steps is large, the inner optimizer is AdamW, and the outer optimizer is Nesterov momentum. On the widely used C4 dataset, we show that DiLoCo on 8 workers performs as well as fully synchronous optimization while communicating 500 times less. DiLoCo exhibits great robustness to the data distribution of each worker. It is also robust to resources becoming unavailable over time, and vice versa, it can seamlessly leverage resources that become available during training. | [
"large-scale",
"language modeling",
"distributed learning",
"federated learning"
] | https://openreview.net/pdf?id=pICSfWkJIk | oREAPeIU98 | decision | 1,718,723,695,326 | pICSfWkJIk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: After a thorough evaluation of the paper and the feedback provided by reviewers and a meta-reviewer, we have made the decision to accept the paper. Our decision is motivated by the paper's relevance to one of the core topics of the workshop. The program chairs believe that the paper presents valuable results that are pertinent to the workshop's objectives, and we are eager to foster discussions and research advancements in this area. Please take into account all reviewers' suggested improvements. Congratulations and hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
pICSfWkJIk | DiLoCo: Distributed Low-Communication Training of Language Models | [
"Arthur Douillard",
"Qixuan Feng",
"Andrei Alex Rusu",
"Rachita Chhaparia",
"Yani Donchev",
"Adhiguna Kuncoro",
"MarcAurelio Ranzato",
"Arthur Szlam",
"Jiajun Shen"
] | Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected accelerators, with devices exchanging gradients and other intermediate states at each optimization step. While it is difficult to build and maintain a single computing cluster hosting many accelerators, it might be easier to find several computing clusters each hosting a smaller number of devices. In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected. The approach is a variant of federated averaging, where the number of inner steps is large, the inner optimizer is AdamW, and the outer optimizer is Nesterov momentum. On the widely used C4 dataset, we show that DiLoCo on 8 workers performs as well as fully synchronous optimization while communicating 500 times less. DiLoCo exhibits great robustness to the data distribution of each worker. It is also robust to resources becoming unavailable over time, and vice versa, it can seamlessly leverage resources that become available during training. | [
"large-scale",
"language modeling",
"distributed learning",
"federated learning"
] | https://openreview.net/pdf?id=pICSfWkJIk | ckbP09VXX8 | official_review | 1,718,094,213,349 | pICSfWkJIk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission3/Reviewer_NHxt"
] | title: Review on DiLoCo
summary: A distributed federated learning strategy, DiLoCo, is presented, which achieves high accuracy on LLM training with low communication overhead. In general, the presented concept could be useful for exploiting federated learning infrastructures, but there are several weaknesses in the work, which are elaborated below. Primarily, the novelty appears limited, given that the method is adapted from an already established algorithm. It is still interesting to see this methodology applied to an LLM use case, but it seems that (1) the considered models are quite small and (2) heterogeneity is not explored. If the authors can extend their study to include these two points in a revised manuscript, the results could be interesting. Furthermore, profiling statistics and information on the systems are missing, which makes it difficult to evaluate the performance.
strengths: - Achieves relatively better accuracy with low communication than the other investigated baselines
- Extensive parametric analysis
- Potential for exploitation of federated learning
weaknesses: The authors are encouraged to work on the suggestions here:
- [line 39] the inner optimizer is replaced with AdamW -- What does AdamW replace?
- [line 44] 'workers need not to communicate at each and every single step but only every H steps which can be in the order of hundreds or even thousands' -- Have you tested gradient accumulation? How is this approach different from gradient accumulation, apart from the possibilities it opens for heterogeneous architectures? From the conclusion, it seems heterogeneity is indeed not explored here.
- Although the authors talk about heterogeneous architectures, this is not explored in the paper. This is also mentioned later as a limitation. It is quite a critical limitation, since the presented methodology could then very well be replaced by a standard gradient accumulation approach.
- The considered model sizes are rather small, given the state of the art in the field. Can the authors provide some results with more parameters? Since this will significantly increase the complexity, the values of H and T will perhaps be quite critical in this case. This is mentioned in the conclusion too; however, it is a crucial point for this paper, since the title of the paper is about LLMs.
- Why only consider perplexity? It is not clear how this term is defined and whether there is some kind of normalisation involved. At the moment, the reported perplexity values seem quite close to each other. Hence it is difficult to say if the improvement over benchmarks is indeed significant.
- In general, it is difficult to follow what the baselines are. A suggestion is to add a table with all baselines (Baseline 1, 2, etc.), along with information on whether they are data-parallel or serial and whether they use pre-training, and then to use these numbers consistently throughout the document.
- Table 2: The information in this table is rather ambiguous. What does 'Baseline, 8x updates' mean? What is 'microbatching' in this table? I am not sure this is the correct word here, since I think the authors mean a smaller batch size. If it is indeed a smaller batch size, that would mean lower GPU utilisation. Have the authors investigated metrics such as GPU utilisation? In general, such profiling statistics should be added to this investigation.
Also, the values in this table assume ideal performance (1x, 8x, etc.). However, in practice this is hardware-dependent. The authors should provide background on the systems and report the real compute/communication times obtained through profiling the code.
- Table 3: It is rather odd that the accuracy is not influenced to a large extent by changing the number of workers. This could be an effect of the model itself. The authors describe three models used in this study in Tab. 1; which of these configurations is used for this result? It would be informative to see this effect with an even larger model.
- Table 4: What are "Relative (%)" and "Absolute (PPL)"? Please define these quantities.
- [line 294] Larger models are more efficient at fitting the same amount of data - What is efficiency here?
- Conclusion: It is not clear where the "low bandwidth" setting is addressed in this paper. The authors should state the bandwidth under which the results were obtained, and how it impacts the comparison to other approaches.
confidence: 4 |
pICSfWkJIk | DiLoCo: Distributed Low-Communication Training of Language Models | [
"Arthur Douillard",
"Qixuan Feng",
"Andrei Alex Rusu",
"Rachita Chhaparia",
"Yani Donchev",
"Adhiguna Kuncoro",
"MarcAurelio Ranzato",
"Arthur Szlam",
"Jiajun Shen"
] | Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected accelerators, with devices exchanging gradients and other intermediate states at each optimization step. While it is difficult to build and maintain a single computing cluster hosting many accelerators, it might be easier to find several computing clusters each hosting a smaller number of devices. In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected. The approach is a variant of federated averaging, where the number of inner steps is large, the inner optimizer is AdamW, and the outer optimizer is Nesterov momentum. On the widely used C4 dataset, we show that DiLoCo on 8 workers performs as well as fully synchronous optimization while communicating 500 times less. DiLoCo exhibits great robustness to the data distribution of each worker. It is also robust to resources becoming unavailable over time, and vice versa, it can seamlessly leverage resources that become available during training. | [
"large-scale",
"language modeling",
"distributed learning",
"federated learning"
] | https://openreview.net/pdf?id=pICSfWkJIk | ZxFeMuQpmE | official_review | 1,718,288,499,109 | pICSfWkJIk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission3/Reviewer_xYuj"
] | title: Promising technique for training large models on poorly connected clusters of accelarators
summary: The authors propose a distributed optimization algorithm (DiLoCo) to train language models on islands of devices that do not have high-bandwidth inter-connectivity. The method is similar to traditional data parallelism but with limited communication. The computational resources are split across $k$ workers or islands, where each island corresponds to a set of closely connected accelerators. The data are split into $k$ shards, one shard per worker. Each worker receives a replica of the model, which is trained for $H$ optimization steps before synchronizing with the other workers. The fact that $H\gg1$ alleviates the need for high-bandwidth communication between the workers.
The authors test their approach by training decoder-only transformers adapted from the Chinchilla architecture on the C4 dataset. They also provide an extensive ablation study. The results show that the method outperforms baselines when measuring the perplexity of the model. The baselines consist of increasing the batch size or increasing the number of optimization steps. The method is also reported to converge faster in terms of overall training time.
The paper presents a simple, elegant, and promising method to help with the training of large language models in low-bandwidth communication settings. It is well aligned with the workshop topics. I recommend accepting the paper.
strengths: - The paper is clear, well organized and written;
- The idea is simple and easy to follow;
- Many experiments investigate different points of the approach (communication frequency, data regimes, number of replicas);
- The framework is elastic and can adapt to variation in the number of workers;
- Authors are aware of some important limitations.
weaknesses: Even if it is already mentioned in the limitations by the authors, having DiLoCo handle heterogeneous workers would indeed be very welcome. If one worker goes much faster than the others, it could remain idle for most of the training.
It might be a point that I missed, but it would be interesting to compare DiLoCo with traditional data parallelism, especially in terms of latency and overall training time.
The core idea of DiLoCo is to enable training with low-bandwidth communication. It would be valuable to provide a lower bound, even a rough one, on the required communication bandwidth relative to the model size. Indeed, could the model be so large, and communication so slow, that the outer steps take too much time for realistic training? Providing figures would help in appreciating the usefulness of DiLoCo in real settings.
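As a purely illustrative back-of-the-envelope calculation (all figures here are assumptions, not numbers taken from the paper): a 150M-parameter replica stored in fp32 is roughly 600 MB; if a worker runs H = 500 inner steps at about 0.3 s per step, it has about 150 s between synchronizations, so hiding communication behind computation would require on the order of 600 MB / 150 s ≈ 4 MB/s ≈ 32 Mbit/s of sustained bandwidth per worker, far less than what step-by-step synchronous data parallelism would need.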
More generally regarding the network, what are the requirements for DiLoCo? Is the connection between workers in different locations totally transparent to DiLoCo? If not, how are network issues (such as lost packets or dropped connections) mitigated?
confidence: 4 |
nYyCgHzStF | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | [
"Cristian Meo",
"Ksenia Sycheva",
"Anirudh Goyal",
"Justin Dauwels"
] | It is a common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70\% compared to the baseline methods. | [
"Parameter Efficient Fine-Tuning",
"Low-Rank Adaptation",
"Quantization"
] | https://openreview.net/pdf?id=nYyCgHzStF | qaMKMMR5Ku | official_review | 1,718,308,061,903 | nYyCgHzStF | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission43/Reviewer_3sHm"
] | title: Sound and instructive work; limited experiments.
summary: The authors use well-known Bayesian tools and tricks to build adaptive quantization and low-rank model adaptation with adaptive rank. The work is easy to follow, clear, and instructive.
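To illustrate the general idea, here is a generic sketch of rank gating with a Concrete/Gumbel-sigmoid relaxation. It is not the authors' exact formulation; the function name, shapes, and temperature value are assumptions made for illustration only.

```python
import numpy as np

def gated_lora_delta(A, B, gate_logits, tau=0.5, rng=None, hard=False):
    """LoRA update W_delta = B @ diag(g) @ A with per-rank gates g in [0, 1].

    Gates are sampled with a Concrete (Gumbel-sigmoid) relaxation so the effective
    rank can be learned by gradient descent; at inference, rank components whose
    gate probability falls below a threshold can be pruned away."""
    if rng is None:
        rng = np.random.default_rng()
    if hard:
        g = (1.0 / (1.0 + np.exp(-gate_logits)) > 0.5).astype(float)
    else:
        u = rng.uniform(1e-6, 1 - 1e-6, size=gate_logits.shape)
        noise = np.log(u) - np.log(1 - u)                    # logistic noise
        g = 1.0 / (1.0 + np.exp(-(gate_logits + noise) / tau))
    return B @ np.diag(g) @ A

# Shapes: A is (r, d_in), B is (d_out, r), one gate logit per rank component.
r, d_in, d_out = 8, 32, 32
rng = np.random.default_rng(0)
A, B = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))
delta_W = gated_lora_delta(A, B, gate_logits=np.zeros(r), rng=rng)   # (d_out, d_in)
```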
strengths: + Despite the fact that the work gives a new perspective on an old problem, the cherry-picked results used in the paper are not novel at all. The work is comprised of multiple techniques proposed in prior works (quantization via Bernoulli RVs and their practical parametrizations). However, I believe the work is sound and instructive.
weaknesses: ## Typesetting
+ Bibliography should be reviewed and actualized: capitalization of titles, missing publication dates, journals/conferences, etc. (the BitFit paper was published at the ACL conference: https://aclanthology.org/2022.acl-short.1).
+ Missing table of contents in the hypertext markup.
## Major Points of Criticism
### Method
+ The Bayesian perspective on LoRA is quite novel at this time. However, competitive works [1,2] have appeared recently.
[1]: https://openreview.net/forum?id=FJiUyzOF1m
[2]: https://openreview.net/forum?id=LZrCBQBCzl
### Experiments
+ It's great that the authors tried to make a holistic empirical study of the proposed method and adopted the experimentation protocol from Hu et al. (2020). However, their comparison is not complete, since their experiments are carried out only on DeBERTaV3-base. This fact limits the contributions, since the experiments do not say anything about how the method scales as model size increases. I believe the authors should experiment at least with DeBERTaV3-medium in order to fill this gap.
+ The lack of a performance evaluation regarding training time is another flaw of the paper.
confidence: 5 |
nYyCgHzStF | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | [
"Cristian Meo",
"Ksenia Sycheva",
"Anirudh Goyal",
"Justin Dauwels"
] | It is a common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70\% compared to the baseline methods. | [
"Parameter Efficient Fine-Tuning",
"Low-Rank Adaptation",
"Quantization"
] | https://openreview.net/pdf?id=nYyCgHzStF | hoMxtkIQiJ | meta_review | 1,718,422,985,544 | nYyCgHzStF | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission43/Area_Chair_bNek"
] | metareview: The paper revisits well-known Bayesian tools and tricks to build adaptive quantization and low-rank model adaptation with adaptive rank.
The reviewers acknowledge the technical novelty and insights of the current manuscript, but all (including the AC) criticize the soundness of the empirical evaluation. The AC would suggest including results on more tasks and neural architectures in the future manuscript.
recommendation: Accept (Poster)
confidence: 3 |
nYyCgHzStF | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | [
"Cristian Meo",
"Ksenia Sycheva",
"Anirudh Goyal",
"Justin Dauwels"
] | It is a common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70\% compared to the baseline methods. | [
"Parameter Efficient Fine-Tuning",
"Low-Rank Adaptation",
"Quantization"
] | https://openreview.net/pdf?id=nYyCgHzStF | H7JXswRuLT | decision | 1,718,650,442,813 | nYyCgHzStF | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
nYyCgHzStF | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | [
"Cristian Meo",
"Ksenia Sycheva",
"Anirudh Goyal",
"Justin Dauwels"
] | It is a common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70\% compared to the baseline methods. | [
"Parameter Efficient Fine-Tuning",
"Low-Rank Adaptation",
"Quantization"
] | https://openreview.net/pdf?id=nYyCgHzStF | D2uoCfm8TO | official_review | 1,718,249,660,033 | nYyCgHzStF | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission43/Reviewer_HMjZ"
] | title: Bayesian-LoRA is interesting but needs broader validation
summary: The paper proposes Bayesian-LoRA (B-LoRA), a novel method for parameter-efficient fine-tuning of large language models. B-LoRA integrates Bayesian approaches for optimizing quantization levels and rank values during the fine-tuning process, aiming to reduce computational costs and energy consumption while maintaining or improving model performance.
strengths: B-LoRA introduces a novel integration of Bayesian methods for optimizing both quantization levels and rank values, providing a new dimension to parameter-efficient fine-tuning.
Unlike some existing methods, B-LoRA does not require extensive hyperparameter searches, making it more user-friendly and adaptable.
weaknesses: The paper does not clearly quantify the impact of B-LoRA on training time compared to other methods. The additional complexity of Bayesian optimization might increase training duration, which is a crucial factor for practical applications.
The evaluation primarily focuses on DeBERTaV3 and does not explore other pre-trained models like GPT-3, limiting the generalizability of the results.
confidence: 3 |
mT3PdRKl40 | Asynchronous Local-SGD Training for Language Modeling | [
"Bo Liu",
"Rachita Chhaparia",
"Arthur Douillard",
"Satyen Kale",
"Andrei Alex Rusu",
"Jiajun Shen",
"Arthur Szlam",
"MarcAurelio Ranzato"
] | Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.
This work presents an empirical study of asynchronous Local-SGD for training language models; that is, each worker updates the global parameters as soon as it has finished its SGD steps. We conduct a comprehensive investigation by examining how worker hardware heterogeneity, model size, number of workers, and optimizer could impact the learning performance. We find that with naive implementations, asynchronous Local-SGD takes more iterations to converge than its synchronous counterpart despite updating the (global) model parameters more frequently. We identify momentum acceleration on the global parameters when worker gradients are stale as a key challenge. We propose a novel method that utilizes a delayed Nesterov momentum update and adjusts the workers' local training steps based on their computation speed. This approach, evaluated with models up to 150M parameters on the C4 dataset, matches the performance of synchronous Local-SGD in terms of perplexity per update step, and significantly surpasses it in terms of wall clock time. | [
"Asynchronous Local-SGD training for Language Modeling"
] | https://openreview.net/pdf?id=mT3PdRKl40 | qMS73rfneJ | official_review | 1,718,353,499,812 | mT3PdRKl40 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission33/Reviewer_gr3e"
] | title: Extensive study of async gradient optimization; solid experimentation and ablation study.
summary: The paper extensively studies various pairs of local and global optimizers in an asynchronous gradient training setup. The main empirical observation is that momentum itself, its updates, and the relation between the momenta of the inner and outer optimizers play a crucial role in convergence speed. The authors propose two solutions to "synchronize" the momenta and carry out solid experiments and an ablation study to support their findings.
strengths: + Interesting empirical observation which facilitates and motivates further research in this area.
+ Solid experiments and ablation study.
weaknesses: + Missing theoretical analysis. However, I thinks that it is minor drawback and is promising direction for further research.
+ Template placeholder in the page header.
+ Bibliography should be reviewed and actualized: capitalization of titles, missing publication dates, journals/conferences, etc. (the Petals paper was published at ACL: https://aclanthology.org/2023.acl-demo.54/).
+ Missing table of contents in the hypertext markup.
confidence: 4 |
mT3PdRKl40 | Asynchronous Local-SGD Training for Language Modeling | [
"Bo Liu",
"Rachita Chhaparia",
"Arthur Douillard",
"Satyen Kale",
"Andrei Alex Rusu",
"Jiajun Shen",
"Arthur Szlam",
"MarcAurelio Ranzato"
] | Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.
This work presents an empirical study of asynchronous Local-SGD for training language models; that is, each worker updates the global parameters as soon as it has finished its SGD steps. We conduct a comprehensive investigation by examining how worker hardware heterogeneity, model size, number of workers, and optimizer could impact the learning performance. We find that with naive implementations, asynchronous Local-SGD takes more iterations to converge than its synchronous counterpart despite updating the (global) model parameters more frequently. We identify momentum acceleration on the global parameters when worker gradients are stale as a key challenge. We propose a novel method that utilizes a delayed Nesterov momentum update and adjusts the workers' local training steps based on their computation speed. This approach, evaluated with models up to 150M parameters on the C4 dataset, matches the performance of synchronous Local-SGD in terms of perplexity per update step, and significantly surpasses it in terms of wall clock time. | [
"Asynchronous Local-SGD training for Language Modeling"
] | https://openreview.net/pdf?id=mT3PdRKl40 | lmtQORUByN | official_review | 1,718,199,279,529 | mT3PdRKl40 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission33/Reviewer_Z83L"
] | title: Important work for increasing the performance the of asynchronous fine-tuning of LLMs but missing reproducibility
summary: The present work introduces an asynchronous Local-SGD variant that is helpful in distributed LLM training settings where communication between workers, and heterogeneity in the workers' computation speed, become the bottleneck. Empirical evaluations show that plain asynchronous Local-SGD variants converge more slowly than their synchronous counterparts. To address this problem, the paper introduces two momentum adaptation techniques: one strategically schedules the updates to the momentum parameters, while the other adjusts the number of local training steps according to the speed of each worker. The results show that, in most cases, the proposed method achieves better performance than related baselines across different model sizes and numbers of workers.
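To make the two techniques concrete, here is a rough, heavily simplified sketch of how such a scheme might look. This is not the authors' exact algorithm: the class name, the plain-update fallback between momentum refreshes, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

class DelayedNesterovServer:
    """Sketch of an async parameter server that applies worker deltas as they arrive
    but refreshes the Nesterov momentum only every `delay` updates (buffering the
    deltas in between), to reduce the effect of stale gradients."""

    def __init__(self, params, outer_lr=0.7, momentum=0.9, delay=4):
        self.params = params
        self.velocity = np.zeros_like(params)
        self.buffer = np.zeros_like(params)
        self.outer_lr, self.momentum, self.delay = outer_lr, momentum, delay
        self.count = 0

    def apply_worker_delta(self, delta):
        self.buffer += delta
        self.count += 1
        if self.count % self.delay == 0:
            grad = self.buffer / self.delay
            self.velocity = self.momentum * self.velocity + grad
            self.params -= self.outer_lr * (grad + self.momentum * self.velocity)
            self.buffer[:] = 0.0
        else:
            self.params -= self.outer_lr * delta   # momentum-free update in between

def dynamic_local_steps(base_steps, worker_speed, max_speed):
    """Faster workers run more local steps so all workers finish at roughly the same time."""
    return max(1, int(round(base_steps * worker_speed / max_speed)))

# Illustrative usage: 8 worker results arriving asynchronously at the server.
server = DelayedNesterovServer(params=np.zeros(10))
for _ in range(8):
    server.apply_worker_delta(0.01 * np.ones(10))
```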
strengths: - The idea to strategically delay the Nestov momentum updates in combination with an adjustment of the speed of the worker is novel and interesting, as it combines aspects of the computational efficiency (speed of the device) with aspects of the learning process (optimizer updates)
- The empirical evaluations show the method to outperform the DiLoCo baseline in most cases
- Different ablation studies in regard to model size, number of workers, and (hyper-) parameters of the momentum scheduling bolster the findings
- The idea to assign a learning rate scheduler to each local data shard is good.
- Results improve with the level of heterogeneity of the workers (Table 1), which is promising
weaknesses: - The focus of this work is on using the proposed method for the fine-tuning of LLMs. However, the largest potential for asynchronous SGD methods is to be realized during pretraining
- Evaluations are only performed up to a maximum number of 16 workers, which might be too small to see large benefits of asynchronous training settings
confidence: 4
limitations: - As the authors also state in their limitations section, the benefits of using the Local-SGD approach begin to decline at larger scale, which can become an issue when moving to larger datasets in the future.
- The authors only release toy-problem code of their work, so the reproducibility of the results is very limited
suggestions: - If possible, evaluate the performance on more workers
- Release the source code to foster adoption in the community and replication of the results
mT3PdRKl40 | Asynchronous Local-SGD Training for Language Modeling | [
"Bo Liu",
"Rachita Chhaparia",
"Arthur Douillard",
"Satyen Kale",
"Andrei Alex Rusu",
"Jiajun Shen",
"Arthur Szlam",
"MarcAurelio Ranzato"
] | Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.
This work presents an empirical study of asynchronous Local-SGD for training language models; that is, each worker updates the global parameters as soon as it has finished its SGD steps. We conduct a comprehensive investigation by examining how worker hardware heterogeneity, model size, number of workers, and optimizer could impact the learning performance. We find that with naive implementations, asynchronous Local-SGD takes more iterations to converge than its synchronous counterpart despite updating the (global) model parameters more frequently. We identify momentum acceleration on the global parameters when worker gradients are stale as a key challenge. We propose a novel method that utilizes a delayed Nesterov momentum update and adjusts the workers' local training steps based on their computation speed. This approach, evaluated with models up to 150M parameters on the C4 dataset, matches the performance of synchronous Local-SGD in terms of perplexity per update step, and significantly surpasses it in terms of wall clock time. | [
"Asynchronous Local-SGD training for Language Modeling"
] | https://openreview.net/pdf?id=mT3PdRKl40 | jopEIXpOE3 | decision | 1,718,722,704,652 | mT3PdRKl40 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Oral)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
mT3PdRKl40 | Asynchronous Local-SGD Training for Language Modeling | [
"Bo Liu",
"Rachita Chhaparia",
"Arthur Douillard",
"Satyen Kale",
"Andrei Alex Rusu",
"Jiajun Shen",
"Arthur Szlam",
"MarcAurelio Ranzato"
] | Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.
This work presents an empirical study of asynchronous Local-SGD for training language models; that is, each worker updates the global parameters as soon as it has finished its SGD steps. We conduct a comprehensive investigation by examining how worker hardware heterogeneity, model size, number of workers, and optimizer could impact the learning performance. We find that with naive implementations, asynchronous Local-SGD takes more iterations to converge than its synchronous counterpart despite updating the (global) model parameters more frequently. We identify momentum acceleration on the global parameters when worker gradients are stale as a key challenge. We propose a novel method that utilizes a delayed Nesterov momentum update and adjusts the workers' local training steps based on their computation speed. This approach, evaluated with models up to 150M parameters on the C4 dataset, matches the performance of synchronous Local-SGD in terms of perplexity per update step, and significantly surpasses it in terms of wall clock time. | [
"Asynchronous Local-SGD training for Language Modeling"
] | https://openreview.net/pdf?id=mT3PdRKl40 | EjYs0g3YjN | official_review | 1,718,061,950,261 | mT3PdRKl40 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission33/Reviewer_Z4Dd"
] | title: Promising Approach for Asynchronous Local-SGD for Language Model Pre-Training
summary: This paper explores the use of asynchronous Local-SGD methods for training language models, integrating two novel techniques that outperform the other tested asynchronous Local-SGD variants: Delayed Nesterov momentum updates (DN) and Dynamic Local Updates (DyLU). The authors test their approach across ablations of the different techniques, model sizes ranging from 20M to 150M parameters, and numbers of heterogeneous workers varying from 4 to 16.
strengths: - The paper includes comprehensive ablation studies with different asynchronous Local-SGD techniques.
- The introduction of Delayed Nesterov momentum updates (DN) and Dynamic Local Updates (DyLU) presents a novel approach to improving asynchronous Local-SGD performance.
weaknesses: - The methods were only demonstrated to scale up to models with 150 million parameters, raising questions about the scalability of the proposed techniques for larger models.
confidence: 3
limitations: - A minimal toy example of the algorithm is provided, but there is no open-source implementation for running the main experiments of the paper, limiting the reproducibility of the results.
suggestions: - Adding details about which AI accelerators the models were trained on.
- It would be great in future work, if sufficient compute resources are available, to test this promising method on larger models to evaluate its scalability beyond 150 million parameters. |
lvmjTZQhRk | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | [
"Zhengqing Yuan",
"Zhaoxu Li",
"Weiran Huang",
"Yanfang Ye",
"Lichao Sun"
] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have demonstrated remarkable advancements, excelling in a variety of vision-language tasks. Despite their prowess, the closed-source nature and computational demands of such models limit their accessibility and applicability. This study introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training and inference across various vision-language tasks, including image captioning (IC) and visual question answering (VQA). Leveraging a compact yet powerful architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained vision encoders, utilizing a unique mapping module for visual and linguistic information fusion. With a training regimen optimized for small backbones and employing a diverse dataset amalgam, TinyGPT-V requires significantly lower computational resources—24GB for training and as little as 8GB for inference—without compromising on performance. Our experiments demonstrate that TinyGPT-V, with its language model 2.8 billion parameters, achieves comparable results in VQA and image inference tasks to its larger counterparts while being uniquely suited for deployment on resource-constrained devices through innovative quantization techniques. This work not only paves the way for more accessible and efficient MLLMs but also underscores the potential of smaller, optimized models in bridging the gap between high performance and computational efficiency in real-world applications. Additionally, this paper introduces a new approach to multimodal large language models using smaller backbones. Our code and training weights are available in the supplementary material. | [
"Multi-modal Large Language Model(MLLM); Vision Language Model(VLM); Computing Efficiency;"
] | https://openreview.net/pdf?id=lvmjTZQhRk | u64UJbXNqc | official_review | 1,718,396,292,579 | lvmjTZQhRk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission38/Reviewer_kHHA"
] | title: TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
summary: This paper presents TinyGPT-V, an open-source, parameter-efficient multimodal large language model (MLLM) designed for vision-language tasks. Built on the Phi-2 small language model, TinyGPT-V performs strongly on many benchmarks, such as visual question answering and referring expression comprehension, while maintaining low computational needs. It can be trained on a 24 GB GPU and deployed on an 8 GB device. The authors also explain their training methodology in detail: a four-stage training process that combines a language model with a visual encoder.
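For concreteness, below is a minimal sketch of the general fusion pattern described above: frozen vision-encoder features are projected by a trainable linear map into the language model's embedding space and prepended to the text tokens. This is not the paper's exact mapping module; the shapes, including the assumed 2560-dimensional Phi-2 hidden size, are illustrative.

```python
import numpy as np

def fuse_vision_language(vit_features, text_token_embeds, W_proj, b_proj):
    """Project frozen vision-encoder patch features into the language model's embedding
    space with a trainable linear map, then prepend them to the text token embeddings
    so the (frozen, LoRA-adapted) LLM attends over both modalities."""
    visual_tokens = vit_features @ W_proj + b_proj          # (n_patches, d_llm)
    return np.concatenate([visual_tokens, text_token_embeds], axis=0)

# Illustrative shapes only: 256 ViT patch features of dim 1024 mapped to an assumed
# 2560-dim LLM embedding space, followed by 16 text token embeddings.
rng = np.random.default_rng(0)
vit_features = rng.standard_normal((256, 1024))
text_embeds = rng.standard_normal((16, 2560))
W_proj, b_proj = 0.02 * rng.standard_normal((1024, 2560)), np.zeros(2560)
sequence = fuse_vision_language(vit_features, text_embeds, W_proj, b_proj)  # (272, 2560)
```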
strengths: * The paper is easy to understand and follow
* The method is explained in enough detail that others should be able to replicate it.
* The method is open source.
* By using the ViT and Phi-2 models with frozen weights and training only the connecting components (linear layers and LoRA parameters), the system becomes highly flexible for future applications.
weaknesses: * The model's smaller scale, while great for computational efficiency, may limit its ability to handle highly complex or nuanced tasks that larger models could manage, potentially even when using the same training procedure this paper suggests.
confidence: 3
limitations: * TinyGPT-V's testing focuses on specific benchmarks, which may not fully represent its performance in diverse real-world scenarios.
suggestions: It would be good to test not only Phi-2 but also larger and more powerful language models. While Phi-2 performs well in tests, it might not do as well in real-life situations. Comparing it with bigger models could give us a better idea of how they work in everyday conversations. |
lvmjTZQhRk | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | [
"Zhengqing Yuan",
"Zhaoxu Li",
"Weiran Huang",
"Yanfang Ye",
"Lichao Sun"
] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have demonstrated remarkable advancements, excelling in a variety of vision-language tasks. Despite their prowess, the closed-source nature and computational demands of such models limit their accessibility and applicability. This study introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training and inference across various vision-language tasks, including image captioning (IC) and visual question answering (VQA). Leveraging a compact yet powerful architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained vision encoders, utilizing a unique mapping module for visual and linguistic information fusion. With a training regimen optimized for small backbones and employing a diverse dataset amalgam, TinyGPT-V requires significantly lower computational resources—24GB for training and as little as 8GB for inference—without compromising on performance. Our experiments demonstrate that TinyGPT-V, with its language model 2.8 billion parameters, achieves comparable results in VQA and image inference tasks to its larger counterparts while being uniquely suited for deployment on resource-constrained devices through innovative quantization techniques. This work not only paves the way for more accessible and efficient MLLMs but also underscores the potential of smaller, optimized models in bridging the gap between high performance and computational efficiency in real-world applications. Additionally, this paper introduces a new approach to multimodal large language models using smaller backbones. Our code and training weights are available in the supplementary material. | [
"Multi-modal Large Language Model(MLLM); Vision Language Model(VLM); Computing Efficiency;"
] | https://openreview.net/pdf?id=lvmjTZQhRk | nIOwS9s2KC | meta_review | 1,718,705,655,440 | lvmjTZQhRk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission38/Area_Chair_yr6z"
] | metareview: **Strengths**
- TinyGPT-V reduces the size and cost of MLLMs, thus increasing the availability of this important technology.
- The paper is well written and the methods are well described, fostering reproducibility.
- TinyGPT-V is open-sourced and thus useful for the broader community.
**Weaknesses**
- It would be useful to discuss TinyGPT-V's limitations, i.e., the cases where it is inferior to larger MLLMs.
- Evaluation should include a wider range of benchmarks and tasks.
**Summary**
This is a timely and important work for the community.
recommendation: Accept (Oral)
confidence: 5 |
lvmjTZQhRk | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | [
"Zhengqing Yuan",
"Zhaoxu Li",
"Weiran Huang",
"Yanfang Ye",
"Lichao Sun"
] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have demonstrated remarkable advancements, excelling in a variety of vision-language tasks. Despite their prowess, the closed-source nature and computational demands of such models limit their accessibility and applicability. This study introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training and inference across various vision-language tasks, including image captioning (IC) and visual question answering (VQA). Leveraging a compact yet powerful architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained vision encoders, utilizing a unique mapping module for visual and linguistic information fusion. With a training regimen optimized for small backbones and employing a diverse dataset amalgam, TinyGPT-V requires significantly lower computational resources—24GB for training and as little as 8GB for inference—without compromising on performance. Our experiments demonstrate that TinyGPT-V, with its language model 2.8 billion parameters, achieves comparable results in VQA and image inference tasks to its larger counterparts while being uniquely suited for deployment on resource-constrained devices through innovative quantization techniques. This work not only paves the way for more accessible and efficient MLLMs but also underscores the potential of smaller, optimized models in bridging the gap between high performance and computational efficiency in real-world applications. Additionally, this paper introduces a new approach to multimodal large language models using smaller backbones. Our code and training weights are available in the supplementary material. | [
"Multi-modal Large Language Model(MLLM); Vision Language Model(VLM); Computing Efficiency;"
] | https://openreview.net/pdf?id=lvmjTZQhRk | cCWLc8j0OQ | official_review | 1,718,299,699,144 | lvmjTZQhRk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission38/Reviewer_Nbce"
] | title: Replacing LLMs with smaller ones with increased training
summary: This paper explains a method for replacing the large Vicuna-13B model with the much smaller Phi-2 (2.8B) model. The authors propose to do this by adding a few previously known components to the MLLM and training the model for longer.
strengths: 1. Clear objective and methodology: The authors have a single objective, namely to reduce the size of the LLM in the overall model. They focus their attention on this goal and take the necessary steps (training for longer and using pretrained components) to achieve it.
2. Usage of pretrained components from different models: The authors employ different pretrained components from BLIP-2, which is a much larger pretrained model. Intuitively, this helps to transfer the knowledge from the larger models trained on larger corpora to the smaller setup.
3. Clear description of additional components and training setup: The authors describe and analyze the training setup sufficiently.
weaknesses: 1. Despite the success of this proposed framework, the added components are elaborate and complicated. Ablations with simple components, like just a single MLP, would have justified the usage of these additional components more.
2. Large amounts of training data and substantially longer training routines were used in this work. This is not directly a weakness, and it is partially justified by the use of a smaller LLM; however, a more frugal use of data would have made this a more favorable framework for researchers.
confidence: 4
limitations: 1. The large amounts of training data somewhat overshadow the effectiveness of the smaller LLM.
2. The diagrams could be improved, as they currently look unpolished.
suggestions: 1. A strong analysis of the individual effectiveness of each training dataset could significantly improve the confidence in the paper.
2. The authors could optionally explore distillation-like methods.
3. The authors could explore a simpler training setup.
The current draft is a good one, and can be accepted in its current condition. The above are just suggestions to make the draft better. |
lvmjTZQhRk | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | [
"Zhengqing Yuan",
"Zhaoxu Li",
"Weiran Huang",
"Yanfang Ye",
"Lichao Sun"
] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have demonstrated remarkable advancements, excelling in a variety of vision-language tasks. Despite their prowess, the closed-source nature and computational demands of such models limit their accessibility and applicability. This study introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training and inference across various vision-language tasks, including image captioning (IC) and visual question answering (VQA). Leveraging a compact yet powerful architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained vision encoders, utilizing a unique mapping module for visual and linguistic information fusion. With a training regimen optimized for small backbones and employing a diverse dataset amalgam, TinyGPT-V requires significantly lower computational resources—24GB for training and as little as 8GB for inference—without compromising on performance. Our experiments demonstrate that TinyGPT-V, with its language model 2.8 billion parameters, achieves comparable results in VQA and image inference tasks to its larger counterparts while being uniquely suited for deployment on resource-constrained devices through innovative quantization techniques. This work not only paves the way for more accessible and efficient MLLMs but also underscores the potential of smaller, optimized models in bridging the gap between high performance and computational efficiency in real-world applications. Additionally, this paper introduces a new approach to multimodal large language models using smaller backbones. Our code and training weights are available in the supplementary material. | [
"Multi-modal Large Language Model(MLLM); Vision Language Model(VLM); Computing Efficiency;"
] | https://openreview.net/pdf?id=lvmjTZQhRk | FNaUtNHXVc | decision | 1,718,721,929,919 | lvmjTZQhRk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
lvmjTZQhRk | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | [
"Zhengqing Yuan",
"Zhaoxu Li",
"Weiran Huang",
"Yanfang Ye",
"Lichao Sun"
] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have demonstrated remarkable advancements, excelling in a variety of vision-language tasks. Despite their prowess, the closed-source nature and computational demands of such models limit their accessibility and applicability. This study introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training and inference across various vision-language tasks, including image captioning (IC) and visual question answering (VQA). Leveraging a compact yet powerful architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained vision encoders, utilizing a unique mapping module for visual and linguistic information fusion. With a training regimen optimized for small backbones and employing a diverse dataset amalgam, TinyGPT-V requires significantly lower computational resources—24GB for training and as little as 8GB for inference—without compromising on performance. Our experiments demonstrate that TinyGPT-V, with its language model 2.8 billion parameters, achieves comparable results in VQA and image inference tasks to its larger counterparts while being uniquely suited for deployment on resource-constrained devices through innovative quantization techniques. This work not only paves the way for more accessible and efficient MLLMs but also underscores the potential of smaller, optimized models in bridging the gap between high performance and computational efficiency in real-world applications. Additionally, this paper introduces a new approach to multimodal large language models using smaller backbones. Our code and training weights are available in the supplementary material. | [
"Multi-modal Large Language Model(MLLM); Vision Language Model(VLM); Computing Efficiency;"
] | https://openreview.net/pdf?id=lvmjTZQhRk | FDhB7q9mWy | official_review | 1,718,311,087,654 | lvmjTZQhRk | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission38/Reviewer_1ky8"
] | title: Review of TinyGPT-V paper
summary: The paper introduces TinyGPT-V, a new open-source model designed to be efficient and effective for vision-language tasks. TinyGPT-V demonstrates that smaller, optimized models can be both high-performing and resource-efficient, making advanced AI more accessible. This study highlights the potential of smaller model architectures for developing efficient multimodal large language models.
strengths: All sections are well-structured with enough information. The abstract provides a concise summary of the paper's objectives, methods, results, and significance, which is essential for scientific papers. The methodology section includes sufficient technical details, making it reproducible for other researchers, which is a crucial aspect of scientific research. The inclusion of figures and tables enhances the clarity of the results and supports the claims made in the paper. The conclusion summarizes the findings, emphasizes the significance of the work, and suggests potential future directions.
weaknesses: 1) The paper claims that TinyGPT-V achieves comparable results to larger models but does not provide a detailed comparative analysis with specific benchmarks or metrics for all competing models. It would be beneficial to see more thorough comparisons with the most relevant existing models, including any edge cases where TinyGPT-V might not perform as well.
2) The paper mentions various tasks (e.g., image captioning, visual question answering) but does not provide a deep dive into the model's performance across a wide range of tasks. There should be more evidence on how well TinyGPT-V generalizes across different types of vision-language tasks.
confidence: 5 |
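The 8 GB-inference figure quoted throughout these records relies on quantizing the language backbone. As a rough, hypothetical illustration of how such a model could be loaded in 4-bit precision for low-memory inference, using the common bitsandbytes integration in transformers: the authors' actual quantization recipe and checkpoint names may differ, and "microsoft/phi-2" is only a stand-in for the TinyGPT-V language model.

```python
# Hypothetical low-memory inference setup: load the language backbone in 4-bit.
# The real multimodal checkpoint and quantization scheme may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,   # matmuls still run in fp16
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_cfg,
    device_map="auto",
)

inputs = tokenizer("Question: What is in the image? Answer:", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A 2.8B-parameter model stored as 4-bit weights occupies roughly 1.5 GB, which is what makes inference on an 8 GB device plausible once the vision encoder and activations are accounted for.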
l6COqSWzi9 | Multi-objective Differentiable Neural Architecture Search | [
"Rhea Sanjay Sukthanker",
"Arber Zela",
"Benedikt Staffler",
"Samuel Dooley",
"Josif Grabocka",
"Frank Hutter"
] | Pareto front profiling in multi-objective optimization (MOO), i.e. finding a diverse set of Pareto optimal solutions, is challenging, especially with expensive objectives like neural network training. Typically, in MOO neural architecture search (NAS), we aim to balance performance and hardware metrics across devices. Prior NAS approaches simplify this task by incorporating hardware constraints into the objective function, but profiling the Pareto front necessitates a computationally expensive search for each constraint. In this work, we propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics, and yields representative and diverse architectures across multiple devices in just one search run. To this end, we parameterize the joint architectural distribution across devices and multiple objectives via a hypernetwork that can be conditioned on hardware features and preference vectors, enabling zero-shot transferability to new devices. Extensive experiments with up to 19 hardware devices and 3 objectives showcase the effectiveness and scalability of our method. Finally, we show that, without extra costs, our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets, including MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation and a decoder-only transformer space for language modelling. | [
"NAS",
"Hardware-awareness",
"Multi-objective Optimization",
"Differentiable Optimization"
] | https://openreview.net/pdf?id=l6COqSWzi9 | aVcPfwVnYo | decision | 1,718,651,401,281 | l6COqSWzi9 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
l6COqSWzi9 | Multi-objective Differentiable Neural Architecture Search | [
"Rhea Sanjay Sukthanker",
"Arber Zela",
"Benedikt Staffler",
"Samuel Dooley",
"Josif Grabocka",
"Frank Hutter"
] | Pareto front profiling in multi-objective optimization (MOO), i.e. finding a diverse set of Pareto optimal solutions, is challenging, especially with expensive objectives like neural network training. Typically, in MOO neural architecture search (NAS), we aim to balance performance and hardware metrics across devices. Prior NAS approaches simplify this task by incorporating hardware constraints into the objective function, but profiling the Pareto front necessitates a computationally expensive search for each constraint. In this work, we propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics, and yields representative and diverse architectures across multiple devices in just one search run. To this end, we parameterize the joint architectural distribution across devices and multiple objectives via a hypernetwork that can be conditioned on hardware features and preference vectors, enabling zero-shot transferability to new devices. Extensive experiments with up to 19 hardware devices and 3 objectives showcase the effectiveness and scalability of our method. Finally, we show that, without extra costs, our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets, including MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation and a decoder-only transformer space for language modelling. | [
"NAS",
"Hardware-awareness",
"Multi-objective Optimization",
"Differentiable Optimization"
] | https://openreview.net/pdf?id=l6COqSWzi9 | MZj2ozy0cF | meta_review | 1,718,636,239,897 | l6COqSWzi9 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission2/Area_Chair_xuaD"
] | metareview: The paper received a single review. Upon checking the paper, the AC is positive about the proposed method, MODNAS, a new approach for hardware-aware NAS that encodes user preferences for the trade-off between performance and hardware metrics, and yields representative and diverse architectures in just one run. The method is tested on a large number of hardware targets and benchmarks, where it shows promising results.
recommendation: Accept (Poster)
confidence: 4 |
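The MODNAS abstract above hinges on conditioning a single hypernetwork on preference vectors (and hardware features) so that one search run covers the whole trade-off front. Below is a simplified, illustrative sketch of that idea, with placeholder objectives and a toy hypernetwork rather than the paper's actual components.

```python
# Toy sketch of preference-conditioned multi-objective optimization: sample a
# preference vector on the simplex, scalarize the objectives with it, and train
# a hypernetwork mapping (preference, hardware embedding) -> architecture
# parameters.  All objectives and dimensions here are placeholders.
import torch
import torch.nn as nn

n_objectives, arch_dim, hw_dim = 3, 64, 16   # e.g. error, latency, energy

hypernet = nn.Sequential(
    nn.Linear(n_objectives + hw_dim, 128),
    nn.ReLU(),
    nn.Linear(128, arch_dim),
)
optimizer = torch.optim.Adam(hypernet.parameters(), lr=1e-3)

def objectives(arch: torch.Tensor) -> torch.Tensor:
    # Differentiable stand-ins for the real objectives (validation loss,
    # a latency predictor, an energy predictor, ...); returns (n_objectives,).
    return torch.stack([arch.pow(2).mean(), arch.abs().mean(), arch.sigmoid().mean()])

for step in range(1000):
    # A Dirichlet draw is a random preference vector on the probability simplex.
    pref = torch.distributions.Dirichlet(torch.ones(n_objectives)).sample()
    hw = torch.randn(hw_dim)                  # stand-in for a device feature vector
    arch = hypernet(torch.cat([pref, hw]))

    # Linear scalarization: one scalar loss per sampled preference.  Over many
    # sampled preferences the hypernetwork learns to trace out the trade-off front.
    loss = (pref * objectives(arch)).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At deployment time, sweeping the preference vector for a fixed hardware embedding yields a family of architectures approximating the Pareto front without any additional search.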