Dataset fields (type, value range):
- forum_id: stringlengths, 8–20
- forum_title: stringlengths, 4–171
- forum_authors: sequencelengths, 0–25
- forum_abstract: stringlengths, 4–4.27k
- forum_keywords: sequencelengths, 1–10
- forum_pdf_url: stringlengths, 38–50
- note_id: stringlengths, 8–13
- note_type: stringclasses, 6 values
- note_created: int64, 1,360B–1,736B
- note_replyto: stringlengths, 8–20
- note_readers: sequencelengths, 1–5
- note_signatures: sequencelengths, 1–1
- note_text: stringlengths, 10–16.6k
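The schema above is flat and denormalized: every note row repeats its parent forum's metadata (title, authors, abstract, keywords, PDF URL). Below is a minimal, self-contained sketch of regrouping such rows into per-paper discussion threads; the two example rows are abbreviated stand-ins based on the records that follow, not a loading API for any particular dataset.

```python
from collections import defaultdict

# Two abbreviated example rows in the schema above; in the full data the forum_*
# fields are repeated verbatim on every note that belongs to the same forum.
rows = [
    {"forum_id": "ksdbauVu00", "note_id": "zkojLV4uxQ", "note_type": "official_review",
     "note_created": 1718339417296, "note_replyto": "ksdbauVu00",
     "note_text": "title: Weak Reject: ..."},
    {"forum_id": "ksdbauVu00", "note_id": "ONZdaHqFnW", "note_type": "decision",
     "note_created": 1718651231672, "note_replyto": "ksdbauVu00",
     "note_text": "decision: Accept (Poster) ..."},
]

# Group the flat note rows back into per-paper threads, ordered by creation time (ms epoch).
threads = defaultdict(list)
for row in rows:
    threads[row["forum_id"]].append(row)

for forum_id, notes in threads.items():
    notes.sort(key=lambda n: n["note_created"])
    print(forum_id, [(n["note_type"], n["note_id"]) for n in notes])
```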
l6COqSWzi9
Multi-objective Differentiable Neural Architecture Search
[ "Rhea Sanjay Sukthanker", "Arber Zela", "Benedikt Staffler", "Samuel Dooley", "Josif Grabocka", "Frank Hutter" ]
Pareto front profiling in multi-objective optimization (MOO), i.e. finding a diverse set of Pareto optimal solutions, is challenging, especially with expensive objectives like neural network training. Typically, in MOO neural architecture search (NAS), we aim to balance performance and hardware metrics across devices. Prior NAS approaches simplify this task by incorporating hardware constraints into the objective function, but profiling the Pareto front necessitates a computationally expensive search for each constraint. In this work, we propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics, and yields representative and diverse architectures across multiple devices in just one search run. To this end, we parameterize the joint architectural distribution across devices and multiple objectives via a hypernetwork that can be conditioned on hardware features and preference vectors, enabling zero-shot transferability to new devices. Extensive experiments with up to 19 hardware devices and 3 objectives showcase the effectiveness and scalability of our method. Finally, we show that, without extra costs, our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets, including MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation and a decoder-only transformer space for language modelling.
[ "NAS", "Hardware-awareness", "Multi-objective Optimization", "Differentiable Optimization" ]
https://openreview.net/pdf?id=l6COqSWzi9
9wNTxZbnQ5
official_review
1,718,292,415,102
l6COqSWzi9
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission2/Reviewer_xdDy" ]
title: A multi-model algorithm for neural architecture search balancing accuracy and hardware requirements summary: The authors propose a multi-objective differentiable neural architecture search algorithm (MODNAS). The algorithm predicts the Pareto front for multi-objective optimization, where the objectives include model accuracy and hardware latency. The algorithm relies on several models. First, a _MetaHyperNetwork_ predicts an unnormalized architecture distribution based on hardware latency inputs and objective preferences. Then, an _Architect_ sampler draws from this distribution and returns a discrete architectural configuration. A _Supernetwork_ estimates the accuracy of the selected architecture, while a _MetaPredictor_ estimates the hardware objectives. The _MetaHyperNetwork_ is trained with multi-objective gradient descent. At inference, the Pareto front is obtained by generating predictions for 24 different preference vectors. strengths: - The proposed architecture has been tested on 4 different search spaces and 19 different hardware devices; - The paper provides an extensive set of experiments and comparisons with baselines; - Compared on the hypervolume of the Pareto front, the proposed approach outperforms several baselines from the literature. weaknesses: - The paper is dense (and long), with many key concepts and algorithms. It would gain in clarity from a more organized and progressive description; - The training process, and especially the data collection, could be detailed more explicitly; - Although the authors compare MODNAS experimentally to many other approaches, it would have been interesting to emphasize the core theoretical differences. (I might also be sensitive to this point because of my lack of expertise in NAS); - Although MODNAS could theoretically accommodate more hardware features, it would be interesting to check results on more features closer to real applications. confidence: 2
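The review above walks through the MODNAS components (MetaHyperNetwork, Architect, Supernetwork, MetaPredictor) and the inference procedure of sweeping 24 preference vectors. A minimal sketch of that inference loop follows, assuming PyTorch is available; all dimensions, module architectures, and the random accuracy/latency proxies are invented stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the components named in the review; all dimensions,
# module choices and the random accuracy/latency proxies are invented.
NUM_EDGES, NUM_OPS, HW_DIM = 8, 5, 10

meta_hypernet = nn.Sequential(                      # "MetaHyperNetwork"
    nn.Linear(HW_DIM + 2, 64), nn.ReLU(), nn.Linear(64, NUM_EDGES * NUM_OPS))

def architect(logits):                              # "Architect": logits -> discrete architecture
    return logits.view(NUM_EDGES, NUM_OPS).argmax(dim=-1)

def supernet_accuracy(arch):                        # "Supernetwork" accuracy proxy
    return torch.rand(()).item()

def hw_metapredictor(arch, hw_features):            # "MetaPredictor" latency proxy
    return torch.rand(()).item()

hw_features = torch.rand(HW_DIM)                    # features of one target device

# Profile the Pareto front by sweeping preference vectors (the review mentions 24 at inference).
front = []
for w in torch.linspace(0.0, 1.0, steps=24):
    preference = torch.stack([w, 1.0 - w])          # accuracy-vs-latency trade-off
    logits = meta_hypernet(torch.cat([hw_features, preference]))
    arch = architect(logits)
    front.append((supernet_accuracy(arch), hw_metapredictor(arch, hw_features), arch))
```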
ksdbauVu00
Resource-constrained Neural Architecture Search on Language Models: A Case Study
[ "Andreas Paraskeva", "Joao Pedro Reis", "Suzan Verberne", "Jan N. van Rijn" ]
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be used as a cutoff criterion during model training. Finally, our learning-curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "LLM", "NAS", "AutoML and Genetic Algorithms." ]
https://openreview.net/pdf?id=ksdbauVu00
zkojLV4uxQ
official_review
1,718,339,417,296
ksdbauVu00
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission35/Reviewer_G42A" ]
title: Weak Reject: The paper applies NAS to DistilBERT using NSGA-2 for evolutionary search. However, the proposed approach has limited novelty and the results can be improved significantly. summary: The paper performs a case study of applying NAS to DistilBERT using NSGA-2 for evolutionary search and a hierarchical search space for the transformer architecture. strengths: - The paper has an interesting finding that there is a good correlation between the pre-training loss and downstream task performance on SQuAD. weaknesses: - There have been several weight-sharing NAS approaches in the literature that have used NSGA-2 for evolutionary search on Transformer-based architectures. Distinctions of the proposed approach from existing methods are not clearly highlighted in the paper. - Results were generated only on the SQuAD v1.1 dataset. Some additional results on GLUE would help strengthen the paper. - The proposed approach seems to be compute intensive and the paper lacks comparisons with existing techniques on accuracy or compute requirements. - Plots and figures for representing the results can be improved significantly. For example, the results in Figure 1 are confusing and difficult to understand. - More details on the search space used are missing in the paper. - Overall writing of the paper can be improved. confidence: 5 suggestions: - Generating results on certain GLUE tasks in addition to SQuAD can help strengthen the paper. Also, it might be better to include separate plots for F1 and EM scores instead of just showing the average results. - The figures representing the results can be improved. The authors can consider including a table that shows the trade-off between accuracy loss and model size reduction. - More details on the search space used can be included to better understand which parameters contribute to the reduction in model size.
ksdbauVu00
Resource-constrained Neural Architecture Search on Language Models: A Case Study
[ "Andreas Paraskeva", "Joao Pedro Reis", "Suzan Verberne", "Jan N. van Rijn" ]
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be used as a cutoff criterion during model training. Finally, our learning-curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "LLM", "NAS", "AutoML and Genetic Algorithms." ]
https://openreview.net/pdf?id=ksdbauVu00
pghqcVfj2y
official_review
1,718,047,263,033
ksdbauVu00
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission35/Reviewer_mg5a" ]
title: Review: "Resource-constrained Neural Architecture Search on Language Models: A Case Study". summary: This paper investigates the application of neural architecture search to optimize the DistilBERT language model in a resource-constrained environment with a 4000 GPU-hour budget. By employing an evolutionary algorithm and a two-level hierarchical search space, the authors aim to enhance the efficiency of NAS. They segment the pre-training phase into two parts to improve resource allocation. The study finds a strong correlation between pre-training validation and downstream task performance, suggesting potential efficiency improvements in the NAS process. However, the work is limited in scale, focusing on a smaller model and evaluating a relatively small number of architectures, and it lacks a comparison to both DistilBERT as a baseline and other established NAS methodologies. strengths: The paper introduces a segmented pre-training phase, breaking down the common pre-training into two phases. This method offers a more efficient evaluation loop in NAS by potentially saving computational resources and improving the selection process for promising models. Additionally, this study provides some insights into resource allocation strategies, highlighting the use of pre-training validation as a cutoff criterion and the adoption of an epoch-level stopping strategy. weaknesses: While the architectural components defining the search space are mentioned, the paper does not provide exact values or ranges for these parameters. This omission makes the search space size unknown and hinders the reproducibility of the results. The experiments are conducted on DistilBERT, a smaller language model, which may not generalize well to larger, more complex models. Additionally, only 44 models were evaluated during the search, which is relatively low compared to other state-of-the-art NAS studies. The evaluation focuses solely on a single downstream task (question-answering), limiting the applicability of the findings to other NLP tasks. The paper does not present or cite any baseline for comparison, making it unclear what the benefit of the proposed method is relative to established state-of-the-art techniques. confidence: 4 suggestions: Please provide detailed information on the exact values or ranges of parameters defining the search space. This will enhance the reproducibility of the results and allow for better comparison with other studies. Extend the experiments to include other language models and a wider variety of downstream tasks to better assess the generalizability of the approach. Present comparisons with established baselines to clearly demonstrate the advantages and improvements offered by the proposed method. Please verify the captions in Figure 2. If the "Model Size ratio" is defined as the ratio of the original to optimized model size, and assuming the optimized model is smaller than the original, the values should logically be greater than 1.
ksdbauVu00
Resource-constrained Neural Architecture Search on Language Models: A Case Study
[ "Andreas Paraskeva", "Joao Pedro Reis", "Suzan Verberne", "Jan N. van Rijn" ]
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be used as a cutoff criterion during model training. Finally, our learning-curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "LLM", "NAS", "AutoML and Genetic Algorithms." ]
https://openreview.net/pdf?id=ksdbauVu00
ke1CfEadLJ
official_review
1,718,195,543,379
ksdbauVu00
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission35/Reviewer_Zc4S" ]
title: Very good ideas; less good methodology? summary: I am not an expert in LLMs or transformers, but I have studied black-box optimization, including NSGA-II. This paper proposes neural architecture search to find transformer architectures that maximize the masked language modelling score while minimizing the number of transformer parameters. In particular, NSGA-II is used to explore the space of feed-forward layers (size, number) and multi-head attention layers of the transformer. For me, the ideas and the approach proposed in this paper are great, but the experimental results are not too convincing. We should accept this paper. strengths: - Important problem - Well written. For a non-expert in LLMs/transformers, it was a super nice introduction to the field and its challenges. - Clear and well-motivated approach regarding the optimization objectives and the search spaces. weaknesses: - Only 1 seed for experiments. - It is not clear to me what happens in phases 0 and 1 of pre-training (page 4). confidence: 4 suggestions: Overall, I really like the idea of NSGA-II applied in this hierarchical way to trade off transformer performance and size. But the authors decided to directly apply and study this method on a problem too big to yield interesting insights. It would be amazing to try your approach on toy transformers first. In particular, since it is not clear to me whether the hierarchical search space is a novel idea, using toy problems would be faster and cheaper to study.
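The reviewer's suggestion of trying the bi-objective NSGA-II setup on toy transformers first can be sketched with an off-the-shelf NSGA-II implementation. The snippet below assumes pymoo is installed; the search variables, their ranges, and the proxy objectives (a made-up loss surrogate and a rough parameter count) are illustrative only and are not the paper's search space.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ToyTransformerNAS(ElementwiseProblem):
    """Toy bi-objective problem: a proxy 'pre-training loss' vs. parameter count of a
    tiny transformer. Variables: [num_layers, hidden_dim, ffn_dim] (ranges invented)."""
    def __init__(self):
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([1, 64, 64]), xu=np.array([6, 512, 1024]))

    def _evaluate(self, x, out, *args, **kwargs):
        layers, hidden, ffn = np.round(x).astype(int)
        params = layers * (4 * hidden * hidden + 2 * hidden * ffn)  # rough per-layer count
        proxy_loss = 10.0 / np.log(params)                          # stand-in for a real MLM loss
        out["F"] = [proxy_loss, float(params)]

res = minimize(ToyTransformerNAS(), NSGA2(pop_size=20), ("n_gen", 10), seed=1, verbose=False)
print(res.F[:5])  # a few (proxy loss, #params) points from the approximated Pareto front
```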
ksdbauVu00
Resource-constrained Neural Architecture Search on Language Models: A Case Study
[ "Andreas Paraskeva", "Joao Pedro Reis", "Suzan Verberne", "Jan N. van Rijn" ]
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be used as a cutoff criterion during model training. Finally, our learning-curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "LLM", "NAS", "AutoML and Genetic Algorithms." ]
https://openreview.net/pdf?id=ksdbauVu00
ONZdaHqFnW
decision
1,718,651,231,672
ksdbauVu00
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
ksdbauVu00
Resource-constrained Neural Architecture Search on Language Models: A Case Study
[ "Andreas Paraskeva", "Joao Pedro Reis", "Suzan Verberne", "Jan N. van Rijn" ]
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be used as a cutoff criterion during model training. Finally, our learning-curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "LLM", "NAS", "AutoML and Genetic Algorithms." ]
https://openreview.net/pdf?id=ksdbauVu00
G9zhunes4x
meta_review
1,718,412,768,203
ksdbauVu00
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission35/Area_Chair_WLWi" ]
metareview: The overall sentiment of the reviews appears to be borderline. I suggest accepting it as a workshop paper if the authors are willing to revise the paper based on the reviewers' feedback. The revisions should clarify the tasks (e.g., search space), training process (e.g., Figure 1), hyper-parameters, and other details. More evaluation results are expected but not mandatory. I do hope the authors carefully consider the reviewers' feedback that the proposed approach indeed needs a lot of computation; in this sense, it makes sense and is easier to start with smaller LLMs and toy problems. I very much appreciate all reviewers for providing such helpful insights and suggestions. recommendation: Accept (Poster) confidence: 4
fZqMVTz7K5
AdaMeM: Memory Efficient Momentum for Adafactor
[ "Nikhil Vyas", "Depen Morwani", "Sham M. Kakade" ]
Adafactor is a memory-efficient algorithm which does not maintain momentum and has near-zero memory overhead compared to gradient descent. However, it performs worse than Adam in many setups. Prior works have shown that this gap can be removed by adding momentum to Adafactor. This comes at the cost of increased memory requirements. In this work we use the ideas of low-rank optimizers such as LoRA and GaLore to maintain momentum on a low-rank subspace of the weights on top of Adafactor, giving a new optimizer: AdaMeM. However, unlike low-rank optimizers, we still utilize full-rank gradients but maintain momentum only on the top SVD subspace of the gradients. We show results on language modelling for models of size 210M and 550M, demonstrating improved performance over Adafactor and GaLore. We also give theoretical arguments supporting the design of AdaMeM.
[ "optimization", "adam", "adafactor", "momentum", "space", "memory" ]
https://openreview.net/pdf?id=fZqMVTz7K5
wLHFJprPfl
official_review
1,718,361,439,651
fZqMVTz7K5
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission48/Reviewer_4zrj" ]
title: Memory-efficient incorporation of momentum into Adafactor summary: The paper suggests extending the Adafactor optimizer with a low-rank approximation of the momentum term as well. The authors provide a theoretical justification of the suggested approach, considering different ways of tracking momentum in the orthogonal subspaces generated by a low-rank gradient approximation. The presented experiments confirm that the AdaMeM method gives smaller loss values with a smaller memory footprint. strengths: 1. The paper is well-written; the addressed problem and the suggested approach are well-motivated 2. The experiments confirm a better trade-off between loss and memory footprint than competitors 3. The theoretical part motivates tracking momentum only in the subspace rather than in the complete space 4. Search for optimal hyperparameters weaknesses: 1. The runtime comparison is completely ignored by the authors 2. It is unclear what tasks are solved with the considered models and what loss is reported in the plots. 3. Some typos in the text should be fixed 4. Section 6 looks irrelevant to the preceding text and can be easily moved to the appendix without affecting the main focus confidence: 4 limitations: The authors do not explicitly mention any limitations of the paper. suggestions: 1. Add a runtime comparison of the considered methods and vanilla SGD with momentum instead of Section 6 2. Fix typos and proofread the notation. For example, $N_{2,t}$ in line 11 of Alg. 1 is undefined 3. Add experiments with larger models 4. Instead of ranks on the x-axis, please report the total number of parameters or the memory footprint in megabytes 5. Please add a discussion of how quantization of LLMs can be combined with the proposed low-rank optimizer
fZqMVTz7K5
AdaMeM: Memory Efficient Momentum for Adafactor
[ "Nikhil Vyas", "Depen Morwani", "Sham M. Kakade" ]
Adafactor is a memory-efficient algorithm which does not maintain momentum and has near-zero memory overhead compared to gradient descent. However, it performs worse than Adam in many setups. Prior works have shown that this gap can be removed by adding momentum to Adafactor. This comes at the cost of increased memory requirements. In this work we use the ideas of low-rank optimizers such as LoRA and GaLore to maintain momentum on a low-rank subspace of the weights on top of Adafactor, giving a new optimizer: AdaMeM. However, unlike low-rank optimizers, we still utilize full-rank gradients but maintain momentum only on the top SVD subspace of the gradients. We show results on language modelling for models of size 210M and 550M, demonstrating improved performance over Adafactor and GaLore. We also give theoretical arguments supporting the design of AdaMeM.
[ "optimization", "adam", "adafactor", "momentum", "space", "memory" ]
https://openreview.net/pdf?id=fZqMVTz7K5
pv5n7YPRXE
decision
1,718,722,740,320
fZqMVTz7K5
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Oral) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
fZqMVTz7K5
AdaMeM: Memory Efficient Momentum for Adafactor
[ "Nikhil Vyas", "Depen Morwani", "Sham M. Kakade" ]
Adafactor is a memory-efficient algorithm which does not maintain momentum and has near-zero memory overhead compared to gradient descent. However, it performs worse than Adam in many setups. Prior works have shown that this gap can be removed by adding momentum to Adafactor. This comes at the cost of increased memory requirements. In this work we use the ideas of low-rank optimizers such as LoRA and GaLore to maintain momentum on a low-rank subspace of the weights on top of Adafactor, giving a new optimizer: AdaMeM. However, unlike low-rank optimizers, we still utilize full-rank gradients but maintain momentum only on the top SVD subspace of the gradients. We show results on language modelling for models of size 210M and 550M, demonstrating improved performance over Adafactor and GaLore. We also give theoretical arguments supporting the design of AdaMeM.
[ "optimization", "adam", "adafactor", "momentum", "space", "memory" ]
https://openreview.net/pdf?id=fZqMVTz7K5
OsIFmLob4i
official_review
1,718,322,967,376
fZqMVTz7K5
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission48/Reviewer_jvbe" ]
title: AdaMeM: Memory Efficient Momentum for Adafactor summary: The paper introduces AdaMeM, a new memory-efficient optimization algorithm that derives ideas from low-rank optimizers like LoRA and GaLore. Like GaLore, AdaMeM projects gradients into a subspace and maintains momentum for these low-dimensional projected gradients. Additionally, AdaMeM performs gradient descent without momentum on the residual gradients that lie outside the low-dimensional subspace. The study shows that these techniques enable AdaMeM to outperform GaLore in pre-training language models while saving memory. strengths: - AdaMeM introduces a novel approach to train language models in a memory-efficient way by applying Adafactor with momentum in a low-rank gradient space and applying Adafactor without momentum on the residual gradient. - The paper provides theoretical arguments to justify the design choices of AdaMeM, such as maintaining orthogonality between low-rank projected gradients and residual gradients. - The empirical results demonstrate the method's effectiveness in pre-training language models across two model sizes and four optimization techniques. - Theoretical evidence is provided for AdaMeM's memory requirements. weaknesses: - The study provides the theoretical memory requirements of the method. However, the results would be more robust if the memory and time requirements of the experiments were reported alongside the validation loss. - The study would also benefit from more ablation studies, such as the effect of the projection-matrix update frequency on validation loss for AdaMeM. - The study would be more robust if more models of different scales (such as models ranging from 60M to 1B parameters) and architectures were included. - The paper does not include results on how AdaMeM performs on fine-tuning tasks. confidence: 3 suggestions: - Line 24 (right column) and line 86 are repeated. Consider rephrasing them. - Line 246, `In the Galore paper`: use a citation instead of mentioning it as a paper. Same for line 271, right column.
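The mechanism described in this review (momentum kept only inside a low-rank gradient subspace, a momentum-free update on the orthogonal residual) can be sketched as below, assuming PyTorch. The projection here comes from the top-r left singular vectors of the current gradient, and a plain gradient step on the residual stands in for Adafactor's factored update; this is a simplified illustration, not the authors' algorithm.

```python
import torch

def adamem_like_step(weight, grad, momentum, proj, lr=1e-3, beta=0.9):
    """One illustrative update: momentum is tracked only inside the top-r subspace
    spanned by the columns of `proj`; the orthogonal residual gets a plain,
    momentum-free gradient step (standing in for Adafactor's factored update)."""
    low_rank = proj @ (proj.T @ grad)            # gradient component inside the subspace
    residual = grad - low_rank                   # component orthogonal to it
    momentum.mul_(beta).add_(proj.T @ grad)      # momentum state lives in only r dimensions
    update = proj @ momentum + residual          # recombine momentum part and residual part
    weight.add_(update, alpha=-lr)
    return weight, momentum

# Toy usage: refresh the projection from the top-r left singular vectors of the gradient.
W, G, r = torch.randn(128, 64), torch.randn(128, 64), 8
U, _, _ = torch.linalg.svd(G, full_matrices=False)
P = U[:, :r]                                     # (128, r) orthonormal projection basis
m = torch.zeros(r, 64)                           # low-rank momentum buffer
W, m = adamem_like_step(W, G, m, P)
```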
dpK5a5xnel
SatDiffMoE: A Mixture of Estimation Method for Satellite Image Super-resolution with Latent Diffusion Models
[ "Zhaoxu Luo", "Bowen Song", "Liyue Shen" ]
During the acquisition of satellite images, there is generally a trade-off between spatial resolution and temporal resolution (acquisition frequency) due to the onboard sensors of satellite imaging systems. High-resolution satellite images are very important for land crop monitoring, urban planning, wildfire management and a variety of applications. It is a significant yet challenging task to achieve high spatial-temporal resolution in satellite imaging. With the advent of diffusion models, we can now learn strong generative priors to generate realistic satellite images with high resolution, which can be utilized to promote the super-resolution task as well. In this work, we propose a novel diffusion-based fusion algorithm called \textbf{SatDiffMoE} that can take an arbitrary number of sequential low-resolution satellite images at the same location as inputs, and fuse them into one high-resolution reconstructed image with more fine details, by leveraging and fusing the complementary information from different time points. Our algorithm is highly flexible and allows training and inference on an arbitrary number of low-resolution images. Experimental results show that our proposed SatDiffMoE method not only achieves superior performance for the satellite image super-resolution tasks on a variety of datasets, but also attains improved computational efficiency with reduced model parameters, compared with previous methods.
[ "diffusion models", "super-resolution" ]
https://openreview.net/pdf?id=dpK5a5xnel
iSompyKTNc
official_review
1,718,235,047,368
dpK5a5xnel
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission45/Reviewer_WxmB" ]
title: Good description and comparisons, concerns about applicability summary: The article proposes to use a latent diffusion model for super-resolution with application to remote sensing. The approach is interesting because this field is not like most computer vision tasks, as the time-series aspect of image acquisition must be taken into account. strengths: Using diffusion for image restoration is not novel, for example [1,2]. In general, super-resolution by generative methods (such as AEs or GANs, now by diffusion) is suspicious, as it does not really solve the inverse problem of augmenting the resolution by a deconvolution-like approach, but rather replaces an image with plausible content based on the appearance of a low-resolution version of that image. From that point of view it works rather similarly to inpainting, and indeed both methods [1] and [2] can be used indifferently for either. However, for scientific purposes this approach yields unsatisfactory results. Imagine augmenting the resolution of the Hubble telescope artificially, putting plausible stars and galaxies where none exist in reality. This would be scientifically pointless. Another example that makes perhaps more sense would be to sharpen a blurry image of a license plate perfectly, like in the movies, with plausible but totally invented letters and figures, which would also be pointless. Here satellite imagery is rather at the scientific end of computer vision, in the sense that augmenting the resolution of the sensor only makes sense if the result is not only plausible but practically useful, and not merely a pretty picture that has nothing to do with what is actually on the ground to be studied. This aspect is not discussed in the paper, but the results shown in Fig. 3 exemplify my point perfectly. The generated images all look good to some degree but have little to do with what is actually on the ground. However, the proposed method does provide results that are not only plausible but closer to what is actually there compared to the baseline. Here the authors obtain state-of-the-art results on several open datasets compared with SR methods. The purely computer-vision-based methods all provide useless results, whereas the others (including those of the paper) do provide better results. [1] Zongsheng Y. et al. ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting (NeurIPS 2023, Spotlight) [2] Yufei W. et al. SinSR: Diffusion-Based Image Super-Resolution in a Single Step, CVPR 2024 weaknesses: The authors do not discuss implementation, reproducibility of their results, or efficiency. Real-world remote sensing datasets are enormous and efficiency is paramount. They also need to discuss real-world applications of their work. confidence: 5 limitations: - Plausibility of the results - Questionable usefulness - Efficiency - Code availability - Reproducibility. suggestions: - Address the above points - Add some words related to the focus of the workshop. The current version of the paper looks off-topic to me.
dpK5a5xnel
SatDiffMoE: A Mixture of Estimation Method for Satellite Image Super-resolution with Latent Diffusion Models
[ "Zhaoxu Luo", "Bowen Song", "Liyue Shen" ]
During the acquisition of satellite images, there is generally a trade-off between spatial resolution and temporal resolution (acquisition frequency) due to the onboard sensors of satellite imaging systems. High-resolution satellite images are very important for land crop monitoring, urban planning, wildfire management and a variety of applications. It is a significant yet challenging task to achieve high spatial-temporal resolution in satellite imaging. With the advent of diffusion models, we can now learn strong generative priors to generate realistic satellite images with high resolution, which can be utilized to promote the super-resolution task as well. In this work, we propose a novel diffusion-based fusion algorithm called \textbf{SatDiffMoE} that can take an arbitrary number of sequential low-resolution satellite images at the same location as inputs, and fuse them into one high-resolution reconstructed image with more fine details, by leveraging and fusing the complementary information from different time points. Our algorithm is highly flexible and allows training and inference on an arbitrary number of low-resolution images. Experimental results show that our proposed SatDiffMoE method not only achieves superior performance for the satellite image super-resolution tasks on a variety of datasets, but also attains improved computational efficiency with reduced model parameters, compared with previous methods.
[ "diffusion models", "super-resolution" ]
https://openreview.net/pdf?id=dpK5a5xnel
XqsaREphae
official_review
1,718,029,663,446
dpK5a5xnel
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission45/Reviewer_7N8i" ]
title: Review of SatDiffMoE summary: The authors introduce SatDiffMoE, a framework for satellite image super-resolution that leverages a pretrained generative prior (i.e., Stable Diffusion). It comprises two components: time-difference conditioning and score-estimation fusion. Promising experimental results are demonstrated. strengths: - The motivation the authors present is persuasive, in that an effective method for fusing multiple low-resolution images at irregular time steps to obtain a high-resolution signal is indeed needed for many applications. - The quantitative evaluations and ablation indicate the effectiveness of the proposed framework. weaknesses: - The paper overall lacks technical novelty. Simply adding another time embedding network (of identical architecture) to model the relative time difference is a relatively straightforward idea. Running optimization steps to fuse score estimations from multiple LR images adds little novelty to the framework. - Some details on inference and evaluation are unclear. For example, how many LR images do the authors use to produce the HR prediction for each benchmark dataset? What about the other baseline methods the authors compare against (esp. DiffusionSat)? - It looks like SatDiffMoE needs 50-step sampling for each LR image plus the optimization steps to fuse score estimates. The authors do present an ablation table that shows how the inference time increases with more LR images used for sampling, but the comparison against the baseline methods is missing (they only report #params and #training iters, which, in my opinion, are less relevant than the actual inference speed for practical applications). confidence: 3 limitations: Mentioned in the weaknesses section suggestions: Additional experimental results to resolve the concerns in the weaknesses section would be appreciated.
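The two components the review names, time-difference conditioning and score-estimation fusion, can be illustrated with a toy reverse-diffusion step that averages the noise estimates conditioned on each low-resolution image. The DDIM-style update and the naive mean fusion are simplifying assumptions, and the stand-in noise predictor is purely illustrative; neither reflects the paper's exact method.

```python
import torch

def fused_reverse_step(z_t, t, lr_images, time_deltas, eps_model, alpha_t, alpha_prev):
    """One DDIM-style reverse step that averages the noise estimates obtained by
    conditioning on each low-resolution image and its relative acquisition-time offset.
    The naive mean stands in for the paper's fusion/optimization procedure."""
    eps_estimates = [eps_model(z_t, t, lr, dt) for lr, dt in zip(lr_images, time_deltas)]
    eps = torch.stack(eps_estimates).mean(dim=0)              # fuse the per-image estimates
    z0_pred = (z_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()
    return alpha_prev.sqrt() * z0_pred + (1 - alpha_prev).sqrt() * eps

# Toy usage with a stand-in conditional noise predictor (not a trained latent diffusion model).
def toy_eps_model(z, t, lr_img, dt):
    return 0.1 * z + 0.01 * lr_img.mean() + 0.0 * dt

z = torch.randn(1, 4, 32, 32)                                     # latent being denoised
lrs = [torch.randn(1, 3, 64, 64) for _ in range(3)]               # three LR acquisitions
dts = [torch.tensor(0.0), torch.tensor(5.0), torch.tensor(12.0)]  # relative time offsets (days)
z = fused_reverse_step(z, torch.tensor(999), lrs, dts, toy_eps_model,
                       alpha_t=torch.tensor(0.5), alpha_prev=torch.tensor(0.6))
```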
dpK5a5xnel
SatDiffMoE: A Mixture of Estimation Method for Satellite Image Super-resolution with Latent Diffusion Models
[ "Zhaoxu Luo", "Bowen Song", "Liyue Shen" ]
During the acquisition of satellite images, there is generally a trade-off between spatial resolution and temporal resolution (acquisition frequency) due to the onboard sensors of satellite imaging systems. High-resolution satellite images are very important for land crop monitoring, urban planning, wildfire management and a variety of applications. It is a significant yet challenging task to achieve high spatial-temporal resolution in satellite imaging. With the advent of diffusion models, we can now learn strong generative priors to generate realistic satellite images with high resolution, which can be utilized to promote the super-resolution task as well. In this work, we propose a novel diffusion-based fusion algorithm called \textbf{SatDiffMoE} that can take an arbitrary number of sequential low-resolution satellite images at the same location as inputs, and fuse them into one high-resolution reconstructed image with more fine details, by leveraging and fusing the complementary information from different time points. Our algorithm is highly flexible and allows training and inference on an arbitrary number of low-resolution images. Experimental results show that our proposed SatDiffMoE method not only achieves superior performance for the satellite image super-resolution tasks on a variety of datasets, but also attains improved computational efficiency with reduced model parameters, compared with previous methods.
[ "diffusion models", "super-resolution" ]
https://openreview.net/pdf?id=dpK5a5xnel
SjmGTLQEYE
decision
1,718,651,607,665
dpK5a5xnel
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
dpK5a5xnel
SatDiffMoE: A Mixture of Estimation Method for Satellite Image Super-resolution with Latent Diffusion Models
[ "Zhaoxu Luo", "Bowen Song", "Liyue Shen" ]
During the acquisition of satellite images, there is generally a trade-off between spatial resolution and temporal resolution (acquisition frequency) due to the onboard sensors of satellite imaging systems. High-resolution satellite images are very important for land crop monitoring, urban planning, wildfire management and a variety of applications. It is a significant yet challenging task to achieve high spatial-temporal resolution in satellite imaging. With the advent of diffusion models, we can now learn strong generative priors to generate realistic satellite images with high resolution, which can be utilized to promote the super-resolution task as well. In this work, we propose a novel diffusion-based fusion algorithm called \textbf{SatDiffMoE} that can take an arbitrary number of sequential low-resolution satellite images at the same location as inputs, and fuse them into one high-resolution reconstructed image with more fine details, by leveraging and fusing the complementary information from different time points. Our algorithm is highly flexible and allows training and inference on an arbitrary number of low-resolution images. Experimental results show that our proposed SatDiffMoE method not only achieves superior performance for the satellite image super-resolution tasks on a variety of datasets, but also attains improved computational efficiency with reduced model parameters, compared with previous methods.
[ "diffusion models", "super-resolution" ]
https://openreview.net/pdf?id=dpK5a5xnel
OpxewflZJ3
meta_review
1,718,628,437,420
dpK5a5xnel
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission45/Area_Chair_QLPX" ]
metareview: The paper makes use of diffusion models to perform super-resolution on satellite images. Reviewer WxmB raises some concerns regarding the applicability of such an approach. While the AC agrees that detail hallucination and runtime requirements are valid concerns and shortcomings of this class of methods, on balance, it is a step in the right direction. The authors should update the paper, noting these limitations, and add full implementation details and runtime requirements. recommendation: Accept (Poster) confidence: 4
dRp8tAIPhj
Accelerating Best-of-N via Speculative Rejection
[ "Ruiqi Zhang", "Momin Haider", "Ming Yin", "Jiahao Qiu", "Mengdi Wang", "Peter Bartlett", "Andrea Zanette" ]
The safe and effective deployment of Large Language Models (LLMs) often involves generating helpful and benign responses, producing easily comprehensible code, and crafting content with specific stylistic preferences. While different, these tasks share the common mathematical goal of generating responses from a language model with high scores according to a metric of interest. A popular and well-known decoding strategy for this purpose is the Best-of-N method. The method generates a pre-specified number of responses (N) based on a prompt, and then selects the highest-scoring response among them to be returned. While Best-of-N is both simple and effective, its reliance on generating multiple responses to score for any given prompt incurs high inference costs. In this paper we make a first step towards accelerating the Best-of-N algorithm by halting the generation of unpromising utterances, namely those that are unlikely to be returned by the algorithm upon completion. Focusing on the alignment problem, we show that this simple strategy obtains substantial speedups for the Best-of-N algorithm with minimal performance degradation.
[ "alignment", "large language models", "rejection sampling", "best-of-n", "acceleration" ]
https://openreview.net/pdf?id=dRp8tAIPhj
T7lcW6oDHg
official_review
1,718,234,345,226
dRp8tAIPhj
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission51/Reviewer_F3gB" ]
title: A simple yet efficient method to speed up the Best-of-N algorithm without significantly degrading the scores of the selected generated texts. summary: This work proposes a simple heuristic to efficiently sample high-scoring text generations according to the Best-of-N algorithm. The method---called Speculative Best-of-N (SBoN)---consists of (i) generating N completions of $\tau$ tokens in parallel, then (ii) computing a rejection threshold $r_{cut}$ as the $\alpha$-th lower quantile of the rewards, and (iii) continuing to generate to completion all the sequences with a score larger than $r_{cut}$, scoring those, and returning the argmax. They demonstrate that their method can be motivated by the correlation between partial and full scores. They show how SBoN is competitive when compared to the baseline BoN method, while being faster. strengths: The method is well motivated; efficient text generation is an important research topic. The paper is easy to follow. The method is simple to implement. I find the results compelling: the speedup is significant and the SBoN scores are competitive compared to Bo100. I like the comparison to BoM, even though I am confused by how M is selected (see weaknesses). weaknesses: The novelty of the method is low, but this doesn't mean it cannot be valuable to the community. The correlation measured for smaller $\tau$ values is relatively small. It might be that the algorithm performs well not entirely due to the correlation but due to the large sampling size, which guarantees finding at least one sequence with a large partial score and a large final score. I see that in the appendix you also show results for N=50, yet the SBoN scores are larger than 100; I am unsure how this can happen. In my understanding, the BoM baseline should have the same speedup as SBoN; I do not understand why this is not the case. Given the relatively weak correlation, SBoN introduces a bias in the generation. It could be interesting to provide some of the high-scoring generated text which got selected by BoN and rejected by SBoN. confidence: 3 limitations: See weaknesses suggestions: See weaknesses
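The SBoN procedure summarized in this review (draft N partial continuations of $\tau$ tokens, reject those below the $\alpha$-quantile of partial rewards, finish the survivors, return the argmax) can be written out as a short sketch. The `generate` and `reward` callables are assumed interfaces rather than any specific library API, and the sequential loop stands in for the batched parallel generation used in practice.

```python
import numpy as np

def speculative_best_of_n(prompt, generate, reward, n=100, tau=64, alpha=0.5, max_tokens=256):
    """Sketch of the procedure described above: draft n partial continuations of tau
    tokens, drop those whose partial reward falls below the alpha-quantile, finish the
    survivors, and return the highest-scoring completion."""
    partials = [generate(prompt, max_new_tokens=tau) for _ in range(n)]
    partial_rewards = np.array([reward(prompt, p) for p in partials])
    r_cut = np.quantile(partial_rewards, alpha)                    # rejection threshold
    survivors = [p for p, r in zip(partials, partial_rewards) if r >= r_cut]
    finished = [generate(prompt + p, max_new_tokens=max_tokens - tau) for p in survivors]
    completions = [p + f for p, f in zip(survivors, finished)]
    final_rewards = [reward(prompt, c) for c in completions]
    return completions[int(np.argmax(final_rewards))]

# Toy usage with stand-in generator and reward functions.
rng = np.random.default_rng(0)
def toy_generate(text, max_new_tokens):
    return "".join(rng.choice(list("ab "), size=max_new_tokens))
def toy_reward(prompt, completion):
    return completion.count("a") / max(len(completion), 1)
print(speculative_best_of_n("Q: ...\nA:", toy_generate, toy_reward, n=8, tau=8))
```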
dRp8tAIPhj
Accelerating Best-of-N via Speculative Rejection
[ "Ruiqi Zhang", "Momin Haider", "Ming Yin", "Jiahao Qiu", "Mengdi Wang", "Peter Bartlett", "Andrea Zanette" ]
The safe and effective deployment of Large Language Models (LLMs) often involves generating helpful and benign responses, producing easily comprehensible code, and crafting content with specific stylistic preferences. While different, these tasks share the common mathematical goal of generating responses from a language model with high scores according to a metric of interest. A popular and well-known decoding strategy for this purpose is the Best-of-N method. The method generates a pre-specified number of responses (N) based on a prompt, and then selects the highest-scoring response among them to be returned. While Best-of-N is both simple and effective, its reliance on generating multiple responses to score for any given prompt incurs high inference costs. In this paper we make a first step towards accelerating the Best-of-N algorithm by halting the generation of unpromising utterances, namely those that are unlikely to be returned by the algorithm upon completion. Focusing on the alignment problem, we show that this simple strategy obtains substantial speedups for the Best-of-N algorithm with minimal performance degradation.
[ "alignment", "large language models", "rejection sampling", "best-of-n", "acceleration" ]
https://openreview.net/pdf?id=dRp8tAIPhj
GmuyD3LVRC
official_review
1,718,308,767,203
dRp8tAIPhj
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission51/Reviewer_MHry" ]
title: Unsupported assumptions; weak experiments; vague presentation; inappropriate typesetting. summary: The authors propose a method for reducing generation time with LLMs in the Best-of-N generation strategy. The authors use the word "speculative" to describe the main idea of batched generation combined with rejecting (or pruning) completions at an early stage using a reward model. strengths: + The work introduces an interesting perspective with impact on practical applications. + In Appendix C, some theoretical guarantees are described. weaknesses: ## Typesetting There are shortcomings in the draft typesetting. From my perspective, the paper should be desk rejected since some of them are serious. - The abstract consists of two paragraphs of different style. Specifically, the second paragraph has the font size of the main body, which is larger than the required font size in the abstract. The same is true for leading and paragraph spacing. - The bibliography should be reviewed and updated: capitalization of titles, missing publication dates, journals/conferences, etc. (e.g. RAFT is published in #link("https://openreview.net/forum?id=m7p5O7zblY")[TMLR]). - The table of contents is missing in the hypertext markup. - The introductory section seems incoherent and hard to follow. - It would be better to render figures 2, 3, and 4 as vector graphics. There is a slight noticeable lag when scrolling page 8. - Please introduce common abbreviations as early as possible. For example, "A popular and well known decoding strategy for this purpose is the Best-of-N (BoN) method". ## Major Points of Criticism ### Speculative Rejection + The term "speculative rejection" is not defined in the section "Speculative Rejection". + The statement about the correlation between the final reward and the partial reward seems unconvincing. Specifically, Figure 2 looks a little bit strange. I would say that the blue line should be located more within the point cloud (e.g. $y = 4 x - 4x$). Also, this experiment reveals the correlation for only one specific prompt. It makes me doubt the generality of your claim. It would be better to take a statistically significant number of prompts and generate multiple continuations for each prompt. Optionally, this experiment should be repeated for another pair of language model and reward model. I would suggest the authors dig into this idea deeper and study this monotonicity property more thoroughly. It seems a fruitful finding which might result in other applications. ### Experiment + The comparison among different BoN methods is solid. However, there is a lack of comparison with other speculative generation techniques. They give approximately the same speedup, thus it is unclear whether the proposed method is competitive. + Also, trivial generalizations of the proposed method are not studied. For example, there is only one threshold time $tau$. Why not two? Maybe a series of exponentially spaced $tau_k$ is best? ### Efficiency There is no discussion of the joint use of speculative rejection and speculative decoding techniques (e.g. (Leviathan, 2022)). It is obviously possible to use them simultaneously, but some engineering issues related to block size and batching arise. Also, this issue opens another question related to sharing weights between the draft (surrogate) model and the reward model. ## Others ### Related Works + There are unsuitable references. For example, in *Inference Efficiency in LLMs*, QLoRA is PEFT + quantization for training. Correct general examples of quantization (PTQ) are AQLM and QuIP\#, which are SOTA post-training quantization techniques. There are other quantization approaches which require training (AQT) or quantization of KV-caches for efficient inference. + The reference to vLLM is not enough since there are other competitive techniques like DeepSpeed and TensorRT. ### Speculative Best-of-N (SBoN) + The definition of $cal(l)_k$ is missing (missing in the main body and its use in Algo. 1 is unclear). ### Problem Formulation + Please rewrite the Best-of-N optimization problem as a block equation. Now it looks quite blurry, while it is a short, meaningful formal definition. Also, use the same notation throughout the text (e.g. the symbols $=$ and $:=$ are used inconsistently). ### Appendix + I think that the theoretical guarantee described in the appendix must be presented in the main body, at least as Theorem C.1. + From my perspective, there are too many figures in the appendix. It is hard to follow each of them. confidence: 5
dRp8tAIPhj
Accelerating Best-of-N via Speculative Rejection
[ "Ruiqi Zhang", "Momin Haider", "Ming Yin", "Jiahao Qiu", "Mengdi Wang", "Peter Bartlett", "Andrea Zanette" ]
The safe and effective deployment of Large Language Models (LLMs) often involves generating helpful and benign responses, producing easily comprehensible code, and crafting content with specific stylistic preferences. While different, these tasks share the common mathematical goal of generating responses from a language model with high scores according to a metric of interest. A popular and well-known decoding strategy for this purpose is the Best-of-N method. The method generates a pre-specified number of responses (N) based on a prompt, and then selects the highest-scoring response among them to be returned. While Best-of-N is both simple and effective, its reliance on generating multiple responses to score for any given prompt incurs high inference costs. In this paper we make a first step towards accelerating the Best-of-N algorithm by halting the generation of unpromising utterances, namely those that are unlikely to be returned by the algorithm upon completion. Focusing on the alignment problem, we show that this simple strategy obtains substantial speedups for the Best-of-N algorithm with minimal performance degradation.
[ "alignment", "large language models", "rejection sampling", "best-of-n", "acceleration" ]
https://openreview.net/pdf?id=dRp8tAIPhj
BDUIwQnVL8
official_review
1,718,575,525,912
dRp8tAIPhj
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission51/Reviewer_Fe7c" ]
title: Interesting work with some limitations summary: This paper presents a novel technique to speed up the Best-of-N decoding method by early stopping of some of the utterances which it prematurely identifies as wrong or undesirable. The technique seems pretty straightforward and useful. The technique is called speculative rejection and shares some overlap with speculative decoding and beam search decoding. strengths: Some major strengths are listed below. * This is a well written paper. * The problem, motivation and proposed method are very clearly stated and make intuitive sense. * The results are shown with reasonably large models. weaknesses: Although I would like to emphasize that this is a good paper, it has some limitations which, if addressed, could make it a great paper. * Although the paper says that it is trying to speed up BoN and compares only with BoN and its derivatives as baselines, I believe some more non-BoN baselines would be helpful to see the overall efficacy of this method. Overall, I believe more relevant baselines are required. * We know that KV caching helps with generation speedups; since the core claim of this paper is faster generation, I would like to request the authors to include some non-decoding method as a baseline as well and compare against it, or simply show that it can further improve the speed of generation. * The authors have not discussed the quality of prompts, i.e. does this method work for hard prompts? A new table could be added to show the efficacy of this method for easy, medium and hard prompts. confidence: 3 limitations: Already written as weaknesses. suggestions: If the weaknesses are addressed, I think this could be a great paper.
dRp8tAIPhj
Accelerating Best-of-N via Speculative Rejection
[ "Ruiqi Zhang", "Momin Haider", "Ming Yin", "Jiahao Qiu", "Mengdi Wang", "Peter Bartlett", "Andrea Zanette" ]
The safe and effective deployment of Large Language Models (LLMs) often involves generating helpful and benign responses, producing easily comprehensible code, and crafting content with specific stylistic preferences. While different, these tasks share the common mathematical goal of generating responses from a language model with high scores according to a metric of interest. A popular and well-known decoding strategy for this purpose is the Best-of-N method. The method generates a pre-specified number of responses (N) based on a prompt, and then selects the highest-scoring response among them to be returned. While Best-of-N is both simple and effective, its reliance on generating multiple responses to score for any given prompt incurs high inference costs. In this paper we make a first step towards accelerating the Best-of-N algorithm by halting the generation of unpromising utterances, namely those that are unlikely to be returned by the algorithm upon completion. Focusing on the alignment problem, we show that this simple strategy obtains substantial speedups for the Best-of-N algorithm with minimal performance degradation.
[ "alignment", "large language models", "rejection sampling", "best-of-n", "acceleration" ]
https://openreview.net/pdf?id=dRp8tAIPhj
0aCB0O2R8q
decision
1,718,721,948,577
dRp8tAIPhj
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
dKRQ1XC0op
Adaptive Model Pruning in Federated Learning through Loss Exploration
[ "Christian Internò", "Elena Raponi", "Niki van Stein", "Thomas Bäck", "Markus Olhofer", "Yaochu Jin", "Barbara Hammer" ]
The rapid proliferation of smart devices coupled with the advent of 6G networks has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy-security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive model pruning. This mechanism automatically identifies and prunes unimportant model parameters by distilling knowledge about model gradient behavior across different non-IID client losses, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments across various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while maintaining high accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications, from healthcare to smart cities.
[ "Complexity Reduction", "Federated Learning", "Pruning", "Knowledge transfer", "Non-IID data", "Deep Learning" ]
https://openreview.net/pdf?id=dKRQ1XC0op
xPPyCjx3ri
meta_review
1,718,688,721,176
dKRQ1XC0op
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission15/Area_Chair_QVFK" ]
metareview: This submission introduces an adaptive pruning strategy for the federated learning setting; the technique involves computing weight importance during the FL exploration phase. Reviewer sentiment for this submission appears positive overall - the idea appears to be novel and has been explained well. However, writing (especially notation) could be improved and the objectives more clearly stated. I recommend acceptance (poster). recommendation: Accept (Poster) confidence: 3
dKRQ1XC0op
Adaptive Model Pruning in Federated Learning through Loss Exploration
[ "Christian Internò", "Elena Raponi", "Niki van Stein", "Thomas Bäck", "Markus Olhofer", "Yaochu Jin", "Barbara Hammer" ]
The rapid proliferation of smart devices coupled with the advent of 6G networks has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy-security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive model pruning. This mechanism automatically identifies and prunes unimportant model parameters by distilling knowledge of model gradient behavior across different non-IID client losses, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments across various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while maintaining high accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications, from healthcare to smart cities.
[ "Complexity Reduction", "Federated Learning", "Pruning", "Knowledge transfer", "Non-IID data", "Deep Learning" ]
https://openreview.net/pdf?id=dKRQ1XC0op
jClHB3EBSo
official_review
1,717,743,390,297
dKRQ1XC0op
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission15/Reviewer_mPok" ]
title: The main idea is clear and natural but the paper needs improvement. summary: The paper proposes a model pruning approach in federated learning to combat communication overhead and the effects of non-IID datasets across clients. They achieved this by conducting an exploration phase before performing federated learning, where the impact of each parameter on each client's dataset is determined. They demonstrated the effectiveness of their method through experiments. strengths: The idea of determining the importance of each parameter for each client based on the local data seems like a natural solution. The paper's main idea is explained well. weaknesses: There are some writing problems. The Notation section is repeated in the next section, and the line numbers for the algorithms are all zero. Equation 3 also has notation issues. The objective is not clearly expressed: it is unclear whether the pruning is intended to combat non-IID data or to improve communication efficiency. The authors claim multiple times that their objective is to minimize variance through uniform pruning but provide no proof that this method minimizes variance during federated learning. There are ambiguities without explanations, such as E_exp = 150. Additionally, there are a few new parameters like E_exp, C_exp, and T_p for which no guidance is provided on how to choose them. confidence: 4
dKRQ1XC0op
Adaptive Model Pruning in Federated Learning through Loss Exploration
[ "Christian Internò", "Elena Raponi", "Niki van Stein", "Thomas Bäck", "Markus Olhofer", "Yaochu Jin", "Barbara Hammer" ]
The rapid proliferation of smart devices coupled with the advent of 6G networks has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy-security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive model pruning. This mechanism automatically identifies and prunes unimportant model parameters by distilling knowledge of model gradient behavior across different non-IID client losses, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments across various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while maintaining high accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications, from healthcare to smart cities.
[ "Complexity Reduction", "Federated Learning", "Pruning", "Knowledge transfer", "Non-IID data", "Deep Learning" ]
https://openreview.net/pdf?id=dKRQ1XC0op
TPBLQI4N9Y
decision
1,718,721,489,243
dKRQ1XC0op
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
cCEMDCQu7r
Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet
[ "Jingyue Zhuge", "Christian Mayr", "Anand Subramoney", "David Kappel" ]
Neural Architecture Search (NAS) has long been an important research direction, to replace labor-intensive manual architecture search. Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced. In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search. However, these methods are mainly based on the MobileNet search space, which is primarily used for size searches. For the important topology search space, no NAS method has been proposed that does not require retraining. In this work, we fill this gap by proposing a NAS method that does not require retraining based on the topology search space. Our method combines the advantages of previously proposed Hypernetwork and Kshot-NAS. We also propose a new distillation and sampling method for this new NAS architecture. We present results on NAS-Bench-201 and show that our method matches or even exceeds the baseline performance of post-search retraining.
[ "Efficient Neural Architecture Search", "AutoML" ]
https://openreview.net/pdf?id=cCEMDCQu7r
wIAgiVKR71
official_review
1,718,209,772,498
cCEMDCQu7r
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission28/Reviewer_b9w2" ]
title: The paper proposes a NAS method that does not require retraining based on the topology search space summary: The paper proposes a NAS method that does not require retraining, based on the topology search space instead of the size search space. The method is able to train overlapping (but not totally subordinate) sub-networks as part of a super-network. The method combines the advantages of the previously proposed Hypernetwork and Kshot-NAS, as well as distillation and sampling methods. Results on NAS-Bench-201 show that the method matches and possibly exceeds the baseline performance of post-search retraining. strengths: - The paper is tackling the topic of sub-network search that may achieve efficient deployment without retraining. - The paper proposes a sampling method with a high probability of sampling performant sub-networks while minimizing the impact on the remaining sub-networks. - Some experiments show the impact of the proposed method against retraining. weaknesses: # Contribution - The proposed method is a combination of widely known/existing concepts in simultaneous super- and sub-network training: one may expect a clearer and more precise explanation of what is really novel, at least in the bullet points of page 3. # Motivation of the work - A very important discussion of the relationship between "size search" and "topology search" is insufficient in the paper: when is the former useful instead of the latter, and vice versa? In the sentence "Although the size search space is still a very large search space that can be effectively searched for different devices, this search space also has significant drawbacks.": the authors may elaborate further on what these drawbacks are, and on when one may use topology search instead (early enough in the paper). # Reference/discussion of prior work - Some references on training multiple networks are missing: searching sub-networks based on their topology (without retraining) is not a new concept; related work exists, and some of the most recent work is not cited in this paper. Some of the existing methods are able to define topologies that could be trained simultaneously as part of super-network training. # Experiments - Experiments involve search spaces with 15625 architectures, so one may question the extension of this method to larger architectures and larger search spaces. - Comparisons involving more datasets (and possibly other architectures) are missing, which makes it difficult to judge the generalization of the proposed method to more challenging settings. # Presentation, clarity and writing - The presentation of the paper as well as its writing need to be substantially improved (examples below). - In the sentence "However, as mentioned earlier, these are all based on the small search space and will not search for new architectures, all of which are variants of MobileNet, but the methods proposed are very meaningful.": which methods are meaningful? In what sense? In this part of the paper, one may expect more details about the technical differences of these related methods against the authors' claimed contribution. - The sentence "Weight sharing has not been theoretically verified." is not clear; the same remark holds for "the ranking correlation of super-networks based on weight sharing": what is the ranking correlation? And for "the mutual interference". All these concepts (even if introduced in the introduction and related work) need to be clarified and recalled in order to make the paper clear and easier to follow. - The section about FocusFair sampling needs a better clarification of the different steps; many statements in this subsection are not clear. For instance, "higher architecture parameters β between each pair of nodes" is not clear: what are these pairs of nodes? - Some figure captions are very brief and need to be further expanded. - It is important to define acronyms at their first use: OFA, GHN, etc. The same remark applies to the variables used in the math. - English usage needs to be improved: some parts of the paper are difficult to follow and contain multiple repetitions of words and expressions (e.g., lines 55 to 91, where the expression "search space" is repeated 14 times in the same paragraph). Another example is the sentence "The network is stacked from search blocks, and we can search for one architecture for each block": there is a mix of active and passive forms within a short distance in the text. Also, in the sentence "so that its sub-networks can also work.": what "work"? Another example: "weights the K weights", etc. - Typos and suggested updates: "for each platforms" -> "for each platform", "like us" -> "similarly to our method", ... confidence: 5
cCEMDCQu7r
Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet
[ "Jingyue Zhuge", "Christian Mayr", "Anand Subramoney", "David Kappel" ]
Neural Architecture Search (NAS) has long been an important research direction, to replace labor-intensive manual architecture search. Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced. In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search. However, these methods are mainly based on the MobileNet search space, which is primarily used for size searches. For the important topology search space, no NAS method has been proposed that does not require retraining. In this work, we fill this gap by proposing a NAS method that does not require retraining based on the topology search space. Our method combines the advantages of previously proposed Hypernetwork and Kshot-NAS. We also propose a new distillation and sampling method for this new NAS architecture. We present results on NAS-Bench-201 and show that our method matches or even exceeds the baseline performance of post-search retraining.
[ "Efficient Neural Architecture Search", "AutoML" ]
https://openreview.net/pdf?id=cCEMDCQu7r
ftM8XVGWxS
decision
1,718,651,551,718
cCEMDCQu7r
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
cCEMDCQu7r
Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet
[ "Jingyue Zhuge", "Christian Mayr", "Anand Subramoney", "David Kappel" ]
Neural Architecture Search (NAS) has long been an important research direction, to replace labor-intensive manual architecture search. Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced. In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search. However, these methods are mainly based on the MobileNet search space, which is primarily used for size searches. For the important topology search space, no NAS method has been proposed that does not require retraining. In this work, we fill this gap by proposing a NAS method that does not require retraining based on the topology search space. Our method combines the advantages of previously proposed Hypernetwork and Kshot-NAS. We also propose a new distillation and sampling method for this new NAS architecture. We present results on NAS-Bench-201 and show that our method matches or even exceeds the baseline performance of post-search retraining.
[ "Efficient Neural Architecture Search", "AutoML" ]
https://openreview.net/pdf?id=cCEMDCQu7r
fS1zXrApfk
official_review
1,718,345,229,700
cCEMDCQu7r
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission28/Reviewer_93RX" ]
title: Weak Accept: The paper proposes a novel super-network training method for the topological search space and shows promising results on NAS-Bench-201. summary: The paper proposes a novel super-network training technique based on the topological search space in NAS-Bench-201. The proposed technique combines ideas from the Hypernetwork and the K-Shot NAS approaches to improve the expressive ability and performance. Additionally, the paper also introduces a new distillation and sampling strategy for training the super-network and avoiding any additional re-training. strengths: - The paper is very well written and easy to follow. - The proposed approach combines ideas from Hypernetwork and K-Shot NAS in an interesting way to solve the super-network training problem for topological search spaces and avoid having to re-train sub-networks from scratch for good performance. - The results presented in the paper for average accuracy on CIFAR10 and CIFAR100 in NAS-Bench-201 seem promising for further exploration. weaknesses: - The paper lacks comprehensive ablation studies on compute time, memory and accuracy tradeoffs using the proposed approach. For example, there is a 1-2% gap in the max accuracy compared to the baseline. Can this gap be reduced with further training? - In addition to reporting average and max accuracy, it might be better to also show the accuracy differences for certain selected sub-networks compared to the baseline. - Missing results on other larger datasets and benchmarks. The results on ImageNet-16-120, which is also part of NAS-Bench-201, are not shown in the paper. - It is not clear if the proposed approach can be extended beyond CNNs to other architectures like transformers. confidence: 4 suggestions: - The authors can consider including the accuracy differences for certain selected sub-networks of different sizes compared to the baseline, instead of just reporting the average and max accuracy. - The authors can also consider including results on ImageNet-16-120 for completeness on NAS-Bench-201.
cCEMDCQu7r
Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet
[ "Jingyue Zhuge", "Christian Mayr", "Anand Subramoney", "David Kappel" ]
Neural Architecture Search (NAS) has long been an important research direction, to replace labor-intensive manual architecture search. Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced. In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search. However, these methods are mainly based on the MobileNet search space, which is primarily used for size searches. For the important topology search space, no NAS method has been proposed that does not require retraining. In this work, we fill this gap by proposing a NAS method that does not require retraining based on the topology search space. Our method combines the advantages of previously proposed Hypernetwork and Kshot-NAS. We also propose a new distillation and sampling method for this new NAS architecture. We present results on NAS-Bench-201 and show that our method matches or even exceeds the baseline performance of post-search retraining.
[ "Efficient Neural Architecture Search", "AutoML" ]
https://openreview.net/pdf?id=cCEMDCQu7r
d6T68g3PTo
meta_review
1,718,562,331,266
cCEMDCQu7r
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission28/Area_Chair_WpoZ" ]
metareview: The paper introduces a retraining-free NAS method with respect to the topology search space. The approach aims to train an overlapping set of sub-networks as part of a larger super-network, combining Hypernetwork with Kshot-NAS and distillation. The AC agrees with the reviewers regarding the method's novelty. For the final version, the authors should focus on improving the writing and incorporate the suggestions received from the reviewers. recommendation: Accept (Poster) confidence: 5
cCEMDCQu7r
Single Train Multi Deploy on Topology Search Spaces using Kshot-Hypernet
[ "Jingyue Zhuge", "Christian Mayr", "Anand Subramoney", "David Kappel" ]
Neural Architecture Search (NAS) has long been an important research direction, to replace labor-intensive manual architecture search. Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced. In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search. However, these methods are mainly based on the MobileNet search space, which is primarily used for size searches. For the important topology search space, no NAS method has been proposed that does not require retraining. In this work, we fill this gap by proposing a NAS method that does not require retraining based on the topology search space. Our method combines the advantages of previously proposed Hypernetwork and Kshot-NAS. We also propose a new distillation and sampling method for this new NAS architecture. We present results on NAS-Bench-201 and show that our method matches or even exceeds the baseline performance of post-search retraining.
[ "Efficient Neural Architecture Search", "AutoML" ]
https://openreview.net/pdf?id=cCEMDCQu7r
J9gBpY6Qd3
official_review
1,718,271,257,385
cCEMDCQu7r
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission28/Reviewer_hpDS" ]
title: Novel training paradigm summary: This paper introduces an algorithm combining Hypernetwork and KshotNAS. English writing should be improved. strengths: Interesting combination of two existing works. Applying an adapted knowledge distillation method in sampling. weaknesses: Writing needs to be improved. Words are often missing or repeated within a sentence. In the conclusion, the authors claim "... (our method) significantly enhancing the performance of all architectures within this search space." There is no supporting evidence in the paper since only the average performance is reported (unless the authors missed the word "average" in the sentence). Also, I am not fully convinced about choosing the average accuracy as the performance metric, which is not a popular choice among other approaches. Although different subnets are needed for different platforms, is it really necessary to have *all* the sampled subnets achieve good performance? Overall, the experiments lack detailed benchmarking, which weakens the conclusion about the performance. confidence: 3
bpS4vaOg7q
Can LLMs Enhance Performance Prediction for Deep Learning Models?
[ "Karthick Panner Selvam", "Phitchaya Mangpo Phothilimthana", "Sami Abu-El-Haija", "Bryan Perozzi", "Mats Brorsson" ]
Accurate performance prediction of Deep Learning (DL) models is essential for efficient resource allocation and optimizations in various stages of the DL system stack. While existing approaches can achieve high prediction accuracy, they lack the ability to quickly adapt to new hardware environments or emerging workloads. This paper leverages both Graph Neural Networks (GNNs) and Large Language Models (LLMs) to enhance the accuracy and adaptability of DL performance prediction. Our intuition is that GNNs are adept at capturing the structural information of DL models, naturally represented as graphs, while LLMs provide generalization and the ability to quickly adapt to various tasks thanks to extensive pre-training data. We empirically demonstrate that using GNN-derived graph embeddings as inputs to an LLM outperforms traditional representations, including high-level text summary and lossless semi-structured text (e.g., JSON), for this task. Furthermore, we propose a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. Our experiments validate the effectiveness of this approach, showing an 8.8 percentage-point improvement in accuracy over a state-of-the-art GNN baseline. Notably, when adapted to new hardware with few samples, our method achieves a remarkable 30--70 percentage-point increase in accuracy compared to the GNN baseline.
[ "Performance Model", "Graph Neural Networks", "Large Language Models" ]
https://openreview.net/pdf?id=bpS4vaOg7q
wnxD7igzJn
decision
1,718,722,721,103
bpS4vaOg7q
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Oral) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
bpS4vaOg7q
Can LLMs Enhance Performance Prediction for Deep Learning Models?
[ "Karthick Panner Selvam", "Phitchaya Mangpo Phothilimthana", "Sami Abu-El-Haija", "Bryan Perozzi", "Mats Brorsson" ]
Accurate performance prediction of Deep Learning (DL) models is essential for efficient resource allocation and optimizations in various stages of the DL system stack. While existing approaches can achieve high prediction accuracy, they lack the ability to quickly adapt to new hardware environments or emerging workloads. This paper leverages both Graph Neural Networks (GNNs) and Large Language Models (LLMs) to enhance the accuracy and adaptability of DL performance prediction. Our intuition is that GNNs are adept at capturing the structural information of DL models, naturally represented as graphs, while LLMs provide generalization and the ability to quickly adapt to various tasks thanks to extensive pre-training data. We empirically demonstrate that using GNN-derived graph embeddings as inputs to an LLM outperforms traditional representations, including high-level text summary and lossless semi-structured text (e.g., JSON), for this task. Furthermore, we propose a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. Our experiments validate the effectiveness of this approach, showing an 8.8 percentage-point improvement in accuracy over a state-of-the-art GNN baseline. Notably, when adapted to new hardware with few samples, our method achieves a remarkable 30--70 percentage-point increase in accuracy compared to the GNN baseline.
[ "Performance Model", "Graph Neural Networks", "Large Language Models" ]
https://openreview.net/pdf?id=bpS4vaOg7q
enGnYLyEOv
official_review
1,718,352,812,963
bpS4vaOg7q
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission21/Reviewer_1Ajr" ]
title: Review for Can LLMs Enhance Performance Prediction for Deep Learning Models summary: The paper presents a method to leverage both GNNs and LLMs to improve the accuracy and adaptability of DL performance prediction. In the empirical evaluation, this method outperforms traditional representations such as high-level text summaries and lossless semi-structured text on DL performance prediction. In addition, this paper also proposes a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. strengths: The methods are described clearly and the evaluation supports the conclusions. weaknesses: The limitations of the method are not discussed much. confidence: 2
bpS4vaOg7q
Can LLMs Enhance Performance Prediction for Deep Learning Models?
[ "Karthick Panner Selvam", "Phitchaya Mangpo Phothilimthana", "Sami Abu-El-Haija", "Bryan Perozzi", "Mats Brorsson" ]
Accurate performance prediction of Deep Learning (DL) models is essential for efficient resource allocation and optimizations in various stages of the DL system stack. While existing approaches can achieve high prediction accuracy, they lack the ability to quickly adapt to new hardware environments or emerging workloads. This paper leverages both Graph Neural Networks (GNNs) and Large Language Models (LLMs) to enhance the accuracy and adaptability of DL performance prediction. Our intuition is that GNNs are adept at capturing the structural information of DL models, naturally represented as graphs, while LLMs provide generalization and the ability to quickly adapt to various tasks thanks to extensive pre-training data. We empirically demonstrate that using GNN-derived graph embeddings as inputs to an LLM outperforms traditional representations, including high-level text summary and lossless semi-structured text (e.g., JSON), for this task. Furthermore, we propose a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. Our experiments validate the effectiveness of this approach, showing an 8.8 percentage-point improvement in accuracy over a state-of-the-art GNN baseline. Notably, when adapted to new hardware with few samples, our method achieves a remarkable 30--70 percentage-point increase in accuracy compared to the GNN baseline.
[ "Performance Model", "Graph Neural Networks", "Large Language Models" ]
https://openreview.net/pdf?id=bpS4vaOg7q
Eap7u0bqrF
meta_review
1,718,703,815,225
bpS4vaOg7q
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission21/Area_Chair_RHPr" ]
metareview: **Strengths** - The work introduces multiple novel ideas for performance prediction, including GNN+LLM integration, a pre-training strategy, and a text-adaptation dataset. - The paper is well structured and the ideas are intuitively explained. - The evaluation clearly shows an advantage over prior work. **Weaknesses** - The limitations of the approach are not well discussed. This would be important to help readers understand the applicability to their scenarios. - An explanation of why the baseline GNN outperforms on the `fp32` datasets is needed. This could be related to the limitations issue. **Summary** - This is great work to share with the community. recommendation: Accept (Oral) confidence: 5
bpS4vaOg7q
Can LLMs Enhance Performance Prediction for Deep Learning Models?
[ "Karthick Panner Selvam", "Phitchaya Mangpo Phothilimthana", "Sami Abu-El-Haija", "Bryan Perozzi", "Mats Brorsson" ]
Accurate performance prediction of Deep Learning (DL) models is essential for efficient resource allocation and optimizations in various stages of the DL system stack. While existing approaches can achieve high prediction accuracy, they lack the ability to quickly adapt to new hardware environments or emerging workloads. This paper leverages both Graph Neural Networks (GNNs) and Large Language Models (LLMs) to enhance the accuracy and adaptability of DL performance prediction. Our intuition is that GNNs are adept at capturing the structural information of DL models, naturally represented as graphs, while LLMs provide generalization and the ability to quickly adapt to various tasks thanks to extensive pre-training data. We empirically demonstrate that using GNN-derived graph embeddings as inputs to an LLM outperforms traditional representations, including high-level text summary and lossless semi-structured text (e.g., JSON), for this task. Furthermore, we propose a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. Our experiments validate the effectiveness of this approach, showing an 8.8 percentage-point improvement in accuracy over a state-of-the-art GNN baseline. Notably, when adapted to new hardware with few samples, our method achieves a remarkable 30--70 percentage-point increase in accuracy compared to the GNN baseline.
[ "Performance Model", "Graph Neural Networks", "Large Language Models" ]
https://openreview.net/pdf?id=bpS4vaOg7q
CoXDwebkmt
official_review
1,718,310,449,253
bpS4vaOg7q
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission21/Reviewer_QorJ" ]
title: Review of "Can LLMs Enhance Performance Prediction for Deep Learning Models?" paper summary: The study highlights the potential of integrating GNNs and LLMs to enhance DL model performance prediction, particularly in terms of adaptability to new environments and workloads. Current methods, though accurate, are slow to adapt to new hardware environments or new types of workloads so authors propose combining Graph Neural Networks (GNNs) and Large Language Models (LLMs) to improve prediction accuracy and adaptability. GNNs are good at capturing the structural information of DL models, which are naturally represented as graphs; LLMs provide generalization and adaptability to various tasks due to their extensive pre-training on large datasets. strengths: 1) All sections are well-structured and provide enough information 2) The research introduces a novel graph-to-text dataset designed to further research into the integration of GNNs and LLMs. This dataset is valuable for benchmarking and advancing the application of GNN-LLM combinations in graph learning tasks​. weaknesses: 1) Although the paper introduces a novel dataset for the graph-to-text adaptation, it relies heavily on the NNLQP dataset, which might limit the generalizability of the findings. Expanding the datasets to include more diverse DL models and hardware configurations could provide more robust validation. 2) The model's effectiveness is heavily dependent on the structured pre-training strategy. If this pre-training is not carefully managed or if the initial datasets are not representative enough, the model might not perform as well in real-world scenarios. 3) The experiments primarily focus on a specific set of hardware configurations and DL models. A broader range of experiments covering more diverse scenarios could strengthen the claims made in the paper . confidence: 5
bpS4vaOg7q
Can LLMs Enhance Performance Prediction for Deep Learning Models?
[ "Karthick Panner Selvam", "Phitchaya Mangpo Phothilimthana", "Sami Abu-El-Haija", "Bryan Perozzi", "Mats Brorsson" ]
Accurate performance prediction of Deep Learning (DL) models is essential for efficient resource allocation and optimizations in various stages of the DL system stack. While existing approaches can achieve high prediction accuracy, they lack the ability to quickly adapt to new hardware environments or emerging workloads. This paper leverages both Graph Neural Networks (GNNs) and Large Language Models (LLMs) to enhance the accuracy and adaptability of DL performance prediction. Our intuition is that GNNs are adept at capturing the structural information of DL models, naturally represented as graphs, while LLMs provide generalization and the ability to quickly adapt to various tasks thanks to extensive pre-training data. We empirically demonstrate that using GNN-derived graph embeddings as inputs to an LLM outperforms traditional representations, including high-level text summary and lossless semi-structured text (e.g., JSON), for this task. Furthermore, we propose a structured pre-training strategy to enable model adaptation to new hardware environments, significantly reducing the need for extensive retraining. Our experiments validate the effectiveness of this approach, showing an 8.8 percentage-point improvement in accuracy over a state-of-the-art GNN baseline. Notably, when adapted to new hardware with few samples, our method achieves a remarkable 30--70 percentage-point increase in accuracy compared to the GNN baseline.
[ "Performance Model", "Graph Neural Networks", "Large Language Models" ]
https://openreview.net/pdf?id=bpS4vaOg7q
AWwKgJQtwm
official_review
1,718,186,968,714
bpS4vaOg7q
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission21/Reviewer_QfET" ]
title: A promising approach that needs a more rigorous comparison summary: The authors propose an approach that combines a GNN and an LLM to predict the performance of DL models. The GNN is used to obtain embeddings enriched with information about the DL model, and the LLM is used to predict the performance. The approach is tested against several baselines and across various datasets. strengths: 1. A novel pre-training strategy is introduced. 2. The approach is tested on a variety of datasets. 3. The approach is compared to the existing baselines. weaknesses: 1. There is no discussion of why the baseline GNN outperforms the proposed approach on the datasets with suffix "fp32". (Section 5.3) 2. The proposed approach utilizes the pre-training strategy, but there is no information on pre-training the GNN baseline. If the baseline was not pre-trained, the comparison is unfair. (Section 5.3) 3. The choice of datasets for comparison in section 5.4 is not explained. The results lack statistical significance testing. confidence: 3
bp8xXLi2Mp
Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation
[ "Zunnan Xu", "Jiaqi Huang", "Ting Liu", "Yong Liu", "Haonan Han", "Kehong Yuan", "Xiu Li" ]
In the domain of computer vision, Parameter-Efficient Training (PET) is increasingly replacing the traditional paradigm of pre-training followed by full fine-tuning. PET is particularly favored for its effectiveness in large scale models, as it streamlines transfer learning costs and optimizes hardware utilization. However, the prevailing PET methods are primarily designed for single-modal optimization without fine-grained feature extraction design. When applied to multi-modal dense prediction tasks, these methods typically do not match the performance of full fine-tuning methods that utilize more resources. In this paper, we investigate efficient training for referring image segmentation. We introduce DenseCrossAdapter, a parameter-efficient module designed to enhance low-rank visual feature propagation by establishing dense interconnections between each layer and all preceding layers. This facilitates robust cross-modal feature interaction. We also suggest using text adapters to improve textual features. Our approach greatly surpasses state-of-the-art methods with only 0.9% to 1.8% backbone parameter updates, evaluated on challenging benchmarks.
[ "Parameter Efficient Training; Referring Image Segmentation" ]
https://openreview.net/pdf?id=bp8xXLi2Mp
qZ46GFxa4h
decision
1,718,650,266,374
bp8xXLi2Mp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
bp8xXLi2Mp
Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation
[ "Zunnan Xu", "Jiaqi Huang", "Ting Liu", "Yong Liu", "Haonan Han", "Kehong Yuan", "Xiu Li" ]
In the domain of computer vision, Parameter-Efficient Training (PET) is increasingly replacing the traditional paradigm of pre-training followed by full fine-tuning. PET is particularly favored for its effectiveness in large scale models, as it streamlines transfer learning costs and optimizes hardware utilization. However, the prevailing PET methods are primarily designed for single-modal optimization without fine-grained feature extraction design. When applied to multi-modal dense prediction tasks, these methods typically do not match the performance of full fine-tuning methods that utilize more resources. In this paper, we investigate efficient training for referring image segmentation. We introduce DenseCrossAdapter, a parameter-efficient module designed to enhance low-rank visual feature propagation by establishing dense interconnections between each layer and all preceding layers. This facilitates robust cross-modal feature interaction. We also suggest using text adapters to improve textual features. Our approach greatly surpasses state-of-the-art methods with only 0.9% to 1.8% backbone parameter updates, evaluated on challenging benchmarks.
[ "Parameter Efficient Training; Referring Image Segmentation" ]
https://openreview.net/pdf?id=bp8xXLi2Mp
SymHaJ8WYD
official_review
1,718,128,519,258
bp8xXLi2Mp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission7/Reviewer_SuW5" ]
title: review summary: This study investigates the efficient training problem for referring image segmentation. It proposes DenseCrossAdapter, a parameter-efficient module designed to enhance low-rank visual feature propagation, and suggests using text adapters to improve textual features. Pros: - It uses DINO as the vision backbone and demonstrates the visual-text alignment capability of the proposed method. - It tests the effectiveness of the proposed method on three challenging benchmarks. - It systematically compares parameter-efficient training to full fine-tuning. - Comprehensive ablation study. Cons: - It will increase the inference time since the adapters cannot be merged into the model parameters. Add some inference efficiency metrics such as inference time/latency. strengths: - Systematically compares parameter-efficient training to full fine-tuning. - Comprehensive ablation study. weaknesses: - Lacks an analysis of inference efficiency confidence: 3
bp8xXLi2Mp
Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation
[ "Zunnan Xu", "Jiaqi Huang", "Ting Liu", "Yong Liu", "Haonan Han", "Kehong Yuan", "Xiu Li" ]
In the domain of computer vision, Parameter-Efficient Training (PET) is increasingly replacing the traditional paradigm of pre-training followed by full fine-tuning. PET is particularly favored for its effectiveness in large scale models, as it streamlines transfer learning costs and optimizes hardware utilization. However, the prevailing PET methods are primarily designed for single-modal optimization without fine-grained feature extraction design. When applied to multi-modal dense prediction tasks, these methods typically do not match the performance of full fine-tuning methods that utilize more resources. In this paper, we investigate efficient training for referring image segmentation. We introduce DenseCrossAdapter, a parameter-efficient module designed to enhance low-rank visual feature propagation by establishing dense interconnections between each layer and all preceding layers. This facilitates robust cross-modal feature interaction. We also suggest using text adapters to improve textual features. Our approach greatly surpasses state-of-the-art methods with only 0.9% to 1.8% backbone parameter updates, evaluated on challenging benchmarks.
[ "Parameter Efficient Training; Referring Image Segmentation" ]
https://openreview.net/pdf?id=bp8xXLi2Mp
6vsaEFbI6L
official_review
1,718,311,305,472
bp8xXLi2Mp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission7/Reviewer_Ki4k" ]
title: Using adapters to further enhance multimodal alignment for image segmentation summary: This paper proposes to use lightweight adapters for both vision and text tasks to further improve the multimodal alignment for referring image segmentation tasks. The design is simple: DenseCrossAdapters and TextAdapters are added to the vision and text towers, respectively, and trained together with a contrastive loss. The proposed method shows superior results over the previous literature. strengths: 1. The idea of adding adapters to vision/text towers is very straightforward and easy to understand. 2. The final results are very impressive. weaknesses: It seems the designs of the two adapters are different, and the choice between them (e.g., why the DenseCrossAdapter is not used for text) does not seem to be discussed in the ablation. confidence: 2
bp8xXLi2Mp
Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation
[ "Zunnan Xu", "Jiaqi Huang", "Ting Liu", "Yong Liu", "Haonan Han", "Kehong Yuan", "Xiu Li" ]
In the domain of computer vision, Parameter-Efficient Training (PET) is increasingly replacing the traditional paradigm of pre-training followed by full fine-tuning. PET is particularly favored for its effectiveness in large scale models, as it streamlines transfer learning costs and optimizes hardware utilization. However, the prevailing PET methods are primarily designed for single-modal optimization without fine-grained feature extraction design. When applied to multi-modal dense prediction tasks, these methods typically do not match the performance of full fine-tuning methods that utilize more resources. In this paper, we investigate efficient training for referring image segmentation. We introduce DenseCrossAdapter, a parameter-efficient module designed to enhance low-rank visual feature propagation by establishing dense interconnections between each layer and all preceding layers. This facilitates robust cross-modal feature interaction. We also suggest using text adapters to improve textual features. Our approach greatly surpasses state-of-the-art methods with only 0.9% to 1.8% backbone parameter updates, evaluated on challenging benchmarks.
[ "Parameter Efficient Training; Referring Image Segmentation" ]
https://openreview.net/pdf?id=bp8xXLi2Mp
34aaCMyQS6
meta_review
1,718,640,063,921
bp8xXLi2Mp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission7/Area_Chair_Wvmg" ]
metareview: The manuscript introduces a simple yet effective Parameter-Efficient Training method for multi-modal models, specifically for multi-modal dense prediction tasks. All reviewers appreciate the impressive results. recommendation: Accept (Poster) confidence: 3
bYwg5Awx6n
Liouna: Biologically Plausible Learning for Efficient Pre-Training of Transferrable Deep Models
[ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ]
Biologically plausible learning algorithms, inspired by the inherent constraints of biological neural systems, offer a promising path towards communication- and memory-efficient learning with extreme parallelizability, where the learning of individual layers is decoupled so that they can be trained in parallel. In this work, we introduce Liouna (Arabic for "plasticity"), an unsupervised biologically plausible local learning algorithm inspired by predictive coding and masked image modelling. We derive Liouna's update rule, which elegantly reduces to a simple Hebbian rule with subtractive inhibition. We establish new state-of-the-art results for local learning rules across CIFAR-10, CIFAR-100, STL-10, and Imagenette, without imposing training procedures that hinder the attainability of the true benefits of local learning. Remarkably, we discover and demonstrate an emergent behaviour in Liouna, where it learns inter-class similarity and separability through feature sharing and specialization, despite observing no labels during training. Notably, we are the first to study the transfer performance of local learning algorithms. By pre-training on unlabelled data, Liouna outperforms previous state-of-the-art methods on 6 out of 8 downstream tasks and even surpasses end-to-end (E2E) supervised training in the low compute regime. Liouna also demonstrates competitive performance with SimCLR pre-trained models in the resource-limited pre-training scenario. This highlights Liouna's potential for efficient transfer learning and/or acceleration of the initial stages of pre-training, improving its convergence rate in wall-clock time.
[ "efficiency", "pre-training", "self-supervised learning", "biological plausible learning", "local learning rules" ]
https://openreview.net/pdf?id=bYwg5Awx6n
jcUATWKFJs
meta_review
1,718,421,165,226
bYwg5Awx6n
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission49/Area_Chair_H1Ka" ]
metareview: The paper proposes a biologically plausible local learning algorithm, Liouna, and demonstrates how it achieves SoTA transfer learning results on image datasets. Weaknesses noted by reviewers: * some writing issues and suggestions: (1) some equations look incomplete, (2) unclear notation, (3) the structure could be re-organized, (4) the hyper-parameter choices should be explained * lack of experiments on more challenging tasks. The AC encourages the authors to thoroughly consider the feedback provided in the individual reviews and use it to enhance the manuscript. recommendation: Accept (Poster) confidence: 4
bYwg5Awx6n
Liouna: Biologically Plausible Learning for Efficient Pre-Training of Transferrable Deep Models
[ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ]
Biologically plausible learning algorithms, inspired by the inherent constraints of biological neural systems, offer a promising path towards communication- and memory-efficient learning with extreme parallelizability, where the learning of individual layers is decoupled so that they can be trained in parallel. In this work, we introduce Liouna (Arabic for "plasticity"), an unsupervised biologically plausible local learning algorithm inspired by predictive coding and masked image modelling. We derive Liouna's update rule, which elegantly reduces to a simple Hebbian rule with subtractive inhibition. We establish new state-of-the-art results for local learning rules across CIFAR-10, CIFAR-100, STL-10, and Imagenette, without imposing training procedures that hinder the attainability of the true benefits of local learning. Remarkably, we discover and demonstrate an emergent behaviour in Liouna, where it learns inter-class similarity and separability through feature sharing and specialization, despite observing no labels during training. Notably, we are the first to study the transfer performance of local learning algorithms. By pre-training on unlabelled data, Liouna outperforms previous state-of-the-art methods on 6 out of 8 downstream tasks and even surpasses end-to-end (E2E) supervised training in the low compute regime. Liouna also demonstrates competitive performance with SimCLR pre-trained models in the resource-limited pre-training scenario. This highlights Liouna's potential for efficient transfer learning and/or acceleration of the initial stages of pre-training, improving its convergence rate in wall-clock time.
[ "efficiency", "pre-training", "self-supervised learning", "biological plausible learning", "local learning rules" ]
https://openreview.net/pdf?id=bYwg5Awx6n
emRBYmCRXO
decision
1,718,650,464,837
bYwg5Awx6n
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
bYwg5Awx6n
Liouna: Biologically Plausible Learning for Efficient Pre-Training of Transferrable Deep Models
[ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ]
Biologically plausible learning algorithms, inspired by the inherent constraints of biological neural systems, offer a promising path towards communication- and memory-efficient learning with extreme parallelizability, where the learning of individual layers is decoupled so that they can be trained in parallel. In this work, we introduce Liouna (Arabic for "plasticity"), an unsupervised biologically plausible local learning algorithm inspired by predictive coding and masked image modelling. We derive Liouna's update rule, which elegantly reduces to a simple Hebbian rule with subtractive inhibition. We establish new state-of-the-art results for local learning rules across CIFAR-10, CIFAR-100, STL-10, and Imagenette, without imposing training procedures that hinder the attainability of the true benefits of local learning. Remarkably, we discover and demonstrate an emergent behaviour in Liouna, where it learns inter-class similarity and separability through feature sharing and specialization, despite observing no labels during training. Notably, we are the first to study the transfer performance of local learning algorithms. By pre-training on unlabelled data, Liouna outperforms previous state-of-the-art methods on 6 out of 8 downstream tasks and even surpasses end-to-end (E2E) supervised training in the low compute regime. Liouna also demonstrates competitive performance with SimCLR pre-trained models in the resource-limited pre-training scenario. This highlights Liouna's potential for efficient transfer learning and/or acceleration of the initial stages of pre-training, improving its convergence rate in wall-clock time.
[ "efficiency", "pre-training", "self-supervised learning", "biological plausible learning", "local learning rules" ]
https://openreview.net/pdf?id=bYwg5Awx6n
HHUUYvfHi2
official_review
1,718,248,170,323
bYwg5Awx6n
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission49/Reviewer_U8yv" ]
title: Self-supervised local learning rule algorithms show strong performance across a variety of direct and downstream tasks. summary: This paper proposes Liouna -- a biologically inspired local learning algorithm that self-supervises using masked input samples. Specifically, their algorithm minimizes the mean loss between the representations of standard and masked inputs, using a proximal gradient update to adjust weights as appropriate. Models trained with this method demonstrate superior performance in comparison with similar past algorithms and better scaling and transfer abilities. In addition, the authors propose four key requirements for future local learning algorithms. strengths: - The paper is written well and is easy to follow, clearly explaining most biological and algorithmic topics along the way. - The results demonstrated are strong and extensive across a variety of tasks, from direct classification to transfer learning. - The authors make a clear case for the efficacy and practicality of their algorithm, and LLRs at large. These include parallel training, efficiency improvements, and reduced memory overhead. - Tables and graphs are well-formatted and easy to follow. weaknesses: - Some equations could be better formatted. For example, $\mathbf{W}$ is never clearly defined in the line after Eq. 1, and $\mathbf{V}$ is not defined in Eq. 3, which makes the section as a whole harder to follow. - Despite the topic and subsequent method being interesting and novel, the paper runs relatively short, and some areas that could benefit from being expanded upon are never revisited in the main paper. I will note that the appendix contains a lot of interesting material in this regard, so perhaps it would be beneficial to move some sections from there into the main paper. - Some minor inconsistencies. For example, the authors claim to "discard Softhebb from further consideration" on line 208, yet the remainder of their results relies significantly on further comparisons with the Softhebb algorithm. confidence: 3 limitations: - The paper's brevity, given the authors' impressive work in an interesting and novel field. suggestions: Overall, this was an enjoyable, novel, and well-written paper. Some suggestions include: - I think the paper could be made a lot stronger simply by adding additional information or elaborating on claims. Some potential suggestions are: background on Softhebb, since it is used extensively during experiments; moving the hidden representation figures in Appendix F into the main paper to further reinforce that Liouna shows evidence of hierarchical representations. - There are some minor typos. For example "Heirarchial" and "heirarchical" on lines 265 and 267 are misspelled. As such, it would be good to comb over the paper a final time.
bYwg5Awx6n
Liouna: Biologically Plausible Learning for Efficient Pre-Training of Transferrable Deep Models
[ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ]
Biologically plausible learning algorithms, inspired by the inherent constraints of biological neural systems, offer a promising path towards communication- and memory-efficient learning with extreme parallelizability, where the learning of individual layers is decoupled so that they can be trained in parallel. In this work, we introduce Liouna (Arabic for "plasticity"), an unsupervised biologically plausible local learning algorithm inspired by predictive coding and masked image modelling. We derive Liouna's update rule, which elegantly reduces to a simple Hebbian rule with subtractive inhibition. We establish new state-of-the-art results for local learning rules across CIFAR-10, CIFAR-100, STL-10, and Imagenette, without imposing training procedures that hinder the attainability of the true benefits of local learning. Remarkably, we discover and demonstrate an emergent behaviour in Liouna, where it learns inter-class similarity and separability through feature sharing and specialization, despite observing no labels during training. Notably, we are the first to study the transfer performance of local learning algorithms. By pre-training on unlabelled data, Liouna outperforms previous state-of-the-art methods on 6 out of 8 downstream tasks and even surpasses end-to-end (E2E) supervised training in the low compute regime. Liouna also demonstrates competitive performance with SimCLR pre-trained models in the resource-limited pre-training scenario. This highlights Liouna's potential for efficient transfer learning and/or acceleration of the initial stages of pre-training, improving its convergence rate in wall-clock time.
[ "efficiency", "pre-training", "self-supervised learning", "biological plausible learning", "local learning rules" ]
https://openreview.net/pdf?id=bYwg5Awx6n
9lqCkDbfhN
official_review
1,718,369,058,468
bYwg5Awx6n
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission49/Reviewer_HfKo" ]
title: New local learning rule summary: The paper proposes the novel local learning rule Liouna and demonstrates how it works for transfer learning. Experiments are conducted with the CIFAR-10/100, STL-10, and Imagenette datasets. The proposed approach outperforms alternative LLR methods. strengths: 1. Extensive experiments and comparisons with competitors are presented 2. A study of the hidden representations learned by the proposed approach is shown in the appendix 3. Simple update rule weaknesses: 1. Some equations look incomplete, e.g., in equations (2) and (3) the "min" and "argmin" notation is missing. 2. Notation $W^R$ is unclear; what happens with weight matrix $W$? 3. The algorithm pseudocode is moved to the appendix; this structure complicates understanding of the approach 4. The legend in Figure 1 is indistinguishable 5. The obtained accuracy is too low compared to current CV networks trained with backprop - https://paperswithcode.com/task/image-classification 6. The tested architectures can be easily trained with backprop, so it is unclear why an LLR is needed here. 7. The choice of the constraint for the parameters is not well-motivated. There are a lot of other constraints that can be incorporated into the proximal gradient method. confidence: 4 limitations: Only linear layers are discussed in section 2.1. suggestions: 1. Improve the quality of the presentation; e.g., the motivation for local learning rules (parallelization and asynchronous learning) is neither illustrated in the text nor actually exploited 2. Test larger models where vanilla backprop is infeasible or slow 3. Add motivation for the selection of this per-layer constraint on the parameters.
YyVJctb2v4
Boolean Logic for Low-Energy Deep Learning
[ "Van Minh Nguyen", "Cristian Ocampo", "Aymen Askri", "Ba-Hien Tran" ]
Deep learning is computationally intensive. Much effort has been given to reduce the arithmetic complexity whilst energy consumption is the most relevant bottleneck, in which data movement is the dominant part. In addition, the literature focus has been on inference whereas training is several times more intense. In this paper, we make use of the Boolean neuron design and Boolean logic backpropagation to train deep models in the binary domain using Boolean logic instead of gradient descent and real arithmetic. We propose a detailed energy evaluation for both training and inference phases. Our method achieves the best results in standard image classification tasks and consumes almost 27 times less energy with our most efficient and best performing Boolean network. This energy efficiency paves the way for an edge device use, in particular for fine-tuning large models on a dedicated task. In practice, our approach outperforms the state-of-the-art semantic segmentation and shows promising image super-resolution performance.
[ "Boolean logic", "Boolean neuron", "binary network", "hardware complexity", "energy consumption" ]
https://openreview.net/pdf?id=YyVJctb2v4
XuPBvJEYUR
official_review
1,718,110,041,372
YyVJctb2v4
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission22/Reviewer_821g" ]
title: Boolean Logic for Low-Energy Deep Learning summary: This paper presents an approach to evaluate the energy consumption of both training and inference for neural architectures using Boolean logic backpropagation. The use of Boolean logic is a central idea in studying the model's efficiency. It's interesting to see its benefits for memory energy. A few suggestions are given to improve the paper. strengths: The state-of-the-art section is well written, and the experiments for fine-tuning and super-resolution are encouraging and well explained. weaknesses: - The paper doesn't explicitly explain the computation of the energy cost for data transfer. In particular, equation 4 (line 265) seems intuitive but is not proved rigorously; moreover, I could not relate it to the cited paper. Maybe you can explain the intuition behind the energy cost of moving data to DRAM. I have the same remark for equation 5. - ADD-INT in Section 3.2.1 (line 176) is never explained before it is used. - References do not appear in some sections (lines 252, 739, 805). - Your model doesn't show a significant gain in energy compared to baselines for large models on ImageNet classification. confidence: 3 limitations: The paper states that Boolean networks replace the complex calculation of gradients during backpropagation. However, replacing the classic approach requires more details about the differential calculus, namely for binary models. This is used in equations 6 and 7 without explaining the assumptions for backpropagation. The authors wrote a short sentence in Appendix A.2, but it doesn't explain gradient computation in the presence of residual Boolean blocks. suggestions: - I'd suggest that Subsection 3.2 devote more effort to stating the problem instead of relying on references; the material on energy estimation from prior work can be moved to the related-work section. - The number of times filters and ofmaps are reused, or the frequency of access to each memory level, may also be a key factor in designing a new adaptive approach.
YyVJctb2v4
Boolean Logic for Low-Energy Deep Learning
[ "Van Minh Nguyen", "Cristian Ocampo", "Aymen Askri", "Ba-Hien Tran" ]
Deep learning is computationally intensive. Much effort has been given to reduce the arithmetic complexity whilst energy consumption is the most relevant bottleneck, in which data movement is the dominant part. In addition, the literature focus has been on inference whereas training is several times more intense. In this paper, we make use of the Boolean neuron design and Boolean logic backpropagation to train deep models in the binary domain using Boolean logic instead of gradient descent and real arithmetic. We propose a detailed energy evaluation for both training and inference phases. Our method achieves the best results in standard image classification tasks and consumes almost 27 times less energy with our most efficient and best performing Boolean network. This energy efficiency paves the way for an edge device use, in particular for fine-tuning large models on a dedicated task. In practice, our approach outperforms the state-of-the-art semantic segmentation and shows promising image super-resolution performance.
[ "Boolean logic", "Boolean neuron", "binary network", "hardware complexity", "energy consumption" ]
https://openreview.net/pdf?id=YyVJctb2v4
JFn1SsUyob
official_review
1,718,140,231,750
YyVJctb2v4
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission22/Reviewer_91nC" ]
title: Promising Boolean neuron design and Boolean logic backpropagation to replace traditional gradient descent summary: The paper "Boolean Logic for Low-Energy Deep Learning" introduces an innovative approach by utilizing Boolean neuron design and Boolean logic backpropagation, replacing traditional gradient descent and real arithmetic. This method significantly reduces energy consumption, making it ideal for edge devices with limited computational resources. Comprehensive evaluation, including detailed energy estimations for both training and inference phases, reinforces the credibility of the results. The method is validated across various tasks such as image classification, fine-tuning, and image super-resolution, showcasing its versatility. Extensive supplementary materials in the appendix provide further insights into the work. However, energy savings are partially hardware-dependent, and the paper would benefit from a more thorough discussion of limitations and potential challenges in different scenarios. strengths: The introduction of Boolean neuron design and Boolean logic backpropagation presents a novel and innovative approach, challenging the traditional reliance on gradient descent and real arithmetic. The paper delivers a detailed and comprehensive evaluation of the proposed method, including an energy estimation for both the training and inference phases. This in-depth analysis reinforces the credibility of the results. The method has been validated across a variety of tasks, such as image classification, fine-tuning, and image super-resolution, showcasing its adaptability and resilience. For practical applications, especially in fine-tuning large models on edge devices with constrained computational resources, the method demonstrates promising potential, underscoring its real-world relevance. The additional materials provided in the appendix are extensive and offer further insight into the work presented. weaknesses: Energy savings are partially reliant on particular hardware architectures, like the Ascend chip architecture utilized in the experiments. Performance and energy efficiency may differ across various hardware platforms. The paper would be enhanced by a more comprehensive examination of the approach's limitations and potential challenges, including situations where it may not perform as well. confidence: 4
YyVJctb2v4
Boolean Logic for Low-Energy Deep Learning
[ "Van Minh Nguyen", "Cristian Ocampo", "Aymen Askri", "Ba-Hien Tran" ]
Deep learning is computationally intensive. Much effort has been given to reduce the arithmetic complexity whilst energy consumption is the most relevant bottleneck, in which data movement is the dominant part. In addition, the literature focus has been on inference whereas training is several times more intense. In this paper, we make use of the Boolean neuron design and Boolean logic backpropagation to train deep models in the binary domain using Boolean logic instead of gradient descent and real arithmetic. We propose a detailed energy evaluation for both training and inference phases. Our method achieves the best results in standard image classification tasks and consumes almost 27 times less energy with our most efficient and best performing Boolean network. This energy efficiency paves the way for an edge device use, in particular for fine-tuning large models on a dedicated task. In practice, our approach outperforms the state-of-the-art semantic segmentation and shows promising image super-resolution performance.
[ "Boolean logic", "Boolean neuron", "binary network", "hardware complexity", "energy consumption" ]
https://openreview.net/pdf?id=YyVJctb2v4
9XfcTZG6Kl
official_review
1,718,237,002,599
YyVJctb2v4
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission22/Reviewer_BhcX" ]
title: Interesting fully binary network for training and inference with good performance and low energy consumption summary: The paper proposes a study of binary neural networks using Boolean logic training together with optional knowledge distillation. They show that on some tasks they can achieve results close to a full-precision network with vastly reduced energy consumption. These tasks include super-resolution and segmentation, unlike previous results on binary networks that focus on classification. strengths: The paper is clear and well written. It is also well on topic for this workshop. The method of Boolean logic backpropagation is very interesting and novel to this reviewer, although published earlier in Nguyen 2023. The results are very good and encouraging. weaknesses: It is not clear why the authors focus on small networks. confidence: 4 limitations: What is the time efficiency of Boolean logic backpropagation?
YyVJctb2v4
Boolean Logic for Low-Energy Deep Learning
[ "Van Minh Nguyen", "Cristian Ocampo", "Aymen Askri", "Ba-Hien Tran" ]
Deep learning is computationally intensive. Much effort has been given to reduce the arithmetic complexity whilst energy consumption is the most relevant bottleneck, in which data movement is the dominant part. In addition, the literature focus has been on inference whereas training is several times more intense. In this paper, we make use of the Boolean neuron design and Boolean logic backpropagation to train deep models in the binary domain using Boolean logic instead of gradient descent and real arithmetic. We propose a detailed energy evaluation for both training and inference phases. Our method achieves the best results in standard image classification tasks and consumes almost 27 times less energy with our most efficient and best performing Boolean network. This energy efficiency paves the way for an edge device use, in particular for fine-tuning large models on a dedicated task. In practice, our approach outperforms the state-of-the-art semantic segmentation and shows promising image super-resolution performance.
[ "Boolean logic", "Boolean neuron", "binary network", "hardware complexity", "energy consumption" ]
https://openreview.net/pdf?id=YyVJctb2v4
6QbBjzu1cZ
meta_review
1,718,627,919,425
YyVJctb2v4
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission22/Area_Chair_aBMd" ]
metareview: The paper introduces a new approach for training neural networks efficiently using boolean logic, which is hardware friendly and energy efficient. Overall, it's an interesting read and a promising direction for energy savings. All 3 reviews received are generally positive. The authors should try to address the comments received by the camera ready. recommendation: Accept (Poster) confidence: 5
YyVJctb2v4
Boolean Logic for Low-Energy Deep Learning
[ "Van Minh Nguyen", "Cristian Ocampo", "Aymen Askri", "Ba-Hien Tran" ]
Deep learning is computationally intensive. Much effort has been given to reduce the arithmetic complexity whilst energy consumption is the most relevant bottleneck, in which data movement is the dominant part. In addition, the literature focus has been on inference whereas training is several times more intense. In this paper, we make use of the Boolean neuron design and Boolean logic backpropagation to train deep models in the binary domain using Boolean logic instead of gradient descent and real arithmetic. We propose a detailed energy evaluation for both training and inference phases. Our method achieves the best results in standard image classification tasks and consumes almost 27 times less energy with our most efficient and best performing Boolean network. This energy efficiency paves the way for an edge device use, in particular for fine-tuning large models on a dedicated task. In practice, our approach outperforms the state-of-the-art semantic segmentation and shows promising image super-resolution performance.
[ "Boolean logic", "Boolean neuron", "binary network", "hardware complexity", "energy consumption" ]
https://openreview.net/pdf?id=YyVJctb2v4
5bXmDZA2FG
decision
1,718,651,459,561
YyVJctb2v4
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
XUgM4M4Aua
Class-aware Initialization of Early Exits for Pre-training Large Language Models
[ "Alperen Gormez", "Erdem Koyuncu" ]
We propose a novel class-aware weight initialization technique for early exit large language models with the purpose of accelerating pre-training. Our design utilizes the neural collapse phenomenon combined with a Gaussian mixture model for the distribution of feature vectors at a given layer. Specifically, we calculate the average of token representations at the early exit point and use the resulting vectors together with class probabilities for initializing the early exit vectors. The next token prediction accuracy of our class-aware initialization technique is up to five times higher than other baselines at epoch zero and matches or surpasses them in later epochs throughout the pre-training process.
[ "early exit", "weight initialization", "efficient", "class aware", "class means", "pre-training", "LLMs" ]
https://openreview.net/pdf?id=XUgM4M4Aua
jj3yZjZTsj
official_review
1,718,580,113,268
XUgM4M4Aua
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission36/Reviewer_8ofm" ]
title: A Simple method but with some limitations summary: The paper proposes a technique to develop early-exit networks from regular pre-trained base LLMs. Moreover, the proposed technique is classified as an initialization technique, which the authors claim addresses representation collapse during training. Each head is initialized with the mean representation of the previous blocks, and the authors provide an intuitive explanation of the efficacy of this technique. strengths: The key strengths are listed below. * The problem of developing a regular pre-trained decoder-style LLM into an early-exiting LLM seems important for faster generation. * The paper is written without many typos or grammatical errors, and the content flow is coherent. weaknesses: Some of the key limitations are listed below. * The title and the abstract mention pre-training, but the paper uses already pre-trained models as backbones and then trains them for some epochs. Finally, they add their initialization technique to make an already well-trained LLM ready for early exiting. The setting looks a bit unconvincing, and this paper needs a good rewrite to present the motivation and problem setting. * The datasets used here are very small, and no base LLMs can be pre-trained with such datasets. * The baselines are unclear, i.e., the rationale behind why these methods are used for an apples-to-apples comparison is unclear. * A major limitation in my opinion is the motivation: this paper predominantly uses Papyan et al. as a motivation for neural collapse. The referred paper used small-scale vision models with MNIST-type small datasets (which have explicitly defined classes) to study the neural collapse phenomenon. The authors have not established that such a phenomenon exists in LLMs, where the classes are not explicitly defined in the pre-training data. It would be more convincing if the authors started by first establishing this phenomenon for LLMs, and in particular during pre-training. * More early-exiting baselines, such as https://arxiv.org/abs/2207.07061, would help establish the efficacy of this work. * More initialization baselines are required. confidence: 4 limitations: See my comments about the weaknesses. suggestions: See my comments about the weaknesses.
XUgM4M4Aua
Class-aware Initialization of Early Exits for Pre-training Large Language Models
[ "Alperen Gormez", "Erdem Koyuncu" ]
We propose a novel class-aware weight initialization technique for early exit large language models with the purpose of accelerating pre-training. Our design utilizes the neural collapse phenomenon combined with a Gaussian mixture model for the distribution of feature vectors at a given layer. Specifically, we calculate the average of token representations at the early exit point and use the resulting vectors together with class probabilities for initializing the early exit vectors. The next token prediction accuracy of our class-aware initialization technique is up to five times higher than other baselines at epoch zero and matches or surpasses them in later epochs throughout the pre-training process.
[ "early exit", "weight initialization", "efficient", "class aware", "class means", "pre-training", "LLMs" ]
https://openreview.net/pdf?id=XUgM4M4Aua
hFqhudzGe5
official_review
1,718,304,253,428
XUgM4M4Aua
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission36/Reviewer_w2eU" ]
title: Official Review summary: This paper proposes an initialization strategy for early exit layers motivated by the vector AWGN channel. The method is simple and easy to understand, as it involves computing the weights of a particular token as the average of all output vectors of that token from a reference corpus. Experiments show promising results when there is no training at all. However, I have some concerns about the evaluation settings and the lack of certain analyses. strengths: 1. The method is simple in terms of implementation. 2. The motivation is easy to understand and follow. 3. It's interesting to see that the method shows reasonable performance out of the box without having to perform any training, but I have some concerns about fairness in comparison with respect to other methods (see weaknesses). weaknesses: 1. There is no analysis of downstream task performance; the next-token prediction accuracy measure will not really capture those aspects, especially how the method impacts the "original" performance of the model compared to others. 2. Since the external corpus plays the main role in initialization in the proposed method, there is a lack of analysis of how the size and quality of the training corpus impact each method. 3. It's probably not fair to compare the proposed method with the other methods at epoch 0, since it technically sees all the data of the target corpus during initialization. 4. It's interesting that Copy catches up in a single epoch in the non-frozen setting – suggesting it may only need to learn some ‘scaled’ features to be as good in the frozen setting. Hence, I wonder why the authors did not try what happens when the copy method is used, but only the weights at Decoder K=L/2 are kept trainable. confidence: 4 suggestions: 1. Why pre-train for multiple epochs on a small corpus rather than a single epoch on more data (since the latter is more common practice)? 2. The presentation of the paper, such as figure placements, has room for improvement. 3. A period is missing in line 262 (before the conclusion).
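To make the initialization described in the summary concrete, here is a minimal sketch of how I read it: the early-exit head's weight row for each vocabulary item is the mean hidden state observed for that token over the reference corpus, and the bias reflects token frequency. This is my own illustration, not the authors' code — the shapes, the clamping of unseen tokens, and the log-frequency bias are assumptions.

```python
import torch

def class_aware_head_init(hidden_states: torch.Tensor,  # (N, d_model) exit-layer activations
                          next_tokens: torch.Tensor,    # (N,) ground-truth next-token ids (long)
                          vocab_size: int):
    """Early-exit head from per-token class means (illustrative sketch)."""
    d_model = hidden_states.shape[1]
    weight = torch.zeros(vocab_size, d_model, dtype=hidden_states.dtype)
    counts = torch.zeros(vocab_size, dtype=hidden_states.dtype)
    weight.index_add_(0, next_tokens, hidden_states)                       # per-token sums
    counts.index_add_(0, next_tokens,
                      torch.ones(len(next_tokens), dtype=hidden_states.dtype))
    weight = weight / counts.clamp(min=1.0).unsqueeze(1)                   # per-token means
    bias = torch.log(counts.clamp(min=1.0) / counts.sum())                 # class log-frequencies
    return weight, bias
```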
XUgM4M4Aua
Class-aware Initialization of Early Exits for Pre-training Large Language Models
[ "Alperen Gormez", "Erdem Koyuncu" ]
We propose a novel class-aware weight initialization technique for early exit large language models with the purpose of accelerating pre-training. Our design utilizes the neural collapse phenomenon combined with a Gaussian mixture model for the distribution of feature vectors at a given layer. Specifically, we calculate the average of token representations at the early exit point and use the resulting vectors together with class probabilities for initializing the early exit vectors. The next token prediction accuracy of our class-aware initialization technique is up to five times higher than other baselines at epoch zero and matches or surpasses them in later epochs throughout the pre-training process.
[ "early exit", "weight initialization", "efficient", "class aware", "class means", "pre-training", "LLMs" ]
https://openreview.net/pdf?id=XUgM4M4Aua
aP4wcPaTyp
official_review
1,718,374,442,856
XUgM4M4Aua
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission36/Reviewer_JvED" ]
title: Review of Submission #36 summary: The authors introduce a novel strategy to initialize early exit heads that speeds up pre-training compared to random initialization and classification head initialization. The authors motivate their strategy from neural collapse frameworks. Under a k-class data and Gaussian noise assumption, they show the optimality of their strategy. Finally, across 4 architectures and 2 pre-training settings, they showcase the utility of their proposed framework. strengths: The strength of the paper lies in its motivation and the exposition of the proposed method. The authors build on existing frameworks of neural collapse and show that a mean of token embeddings can prove to be a good initialization for early exit heads. Furthermore, they carefully initialize their biases, building on their assumption of Gaussian noise in the token embeddings. With a careful experimental study, the authors discuss the utility of their proposed framework. weaknesses: The major concern with the proposed framework is that it works better than random initialization only when the rest of the model is frozen during pre-training. The authors require a convex combination of different initializations for performance gains in the non-freezing case. For future versions, the authors can provide more explanations/fixes of the instability issues in some of the experiments, where the performance drops drastically after a few epochs of training. Furthermore, the authors can regularize the model when they observe overfitting during training. Finally, it would be interesting to see the robustness of their method to initializing the early exit head at different layers of the model. confidence: 3
XUgM4M4Aua
Class-aware Initialization of Early Exits for Pre-training Large Language Models
[ "Alperen Gormez", "Erdem Koyuncu" ]
We propose a novel class-aware weight initialization technique for early exit large language models with the purpose of accelerating pre-training. Our design utilizes the neural collapse phenomenon combined with a Gaussian mixture model for the distribution of feature vectors at a given layer. Specifically, we calculate the average of token representations at the early exit point and use the resulting vectors together with class probabilities for initializing the early exit vectors. The next token prediction accuracy of our class-aware initialization technique is up to five times higher than other baselines at epoch zero and matches or surpasses them in later epochs throughout the pre-training process.
[ "early exit", "weight initialization", "efficient", "class aware", "class means", "pre-training", "LLMs" ]
https://openreview.net/pdf?id=XUgM4M4Aua
HNExNCbvIT
meta_review
1,718,694,576,916
XUgM4M4Aua
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission36/Area_Chair_NM9E" ]
metareview: Overall reviewer sentiment for this submission appears to be positive. Some common strengths and weaknesses pointed out by reviewers: * (+) Multiple reviewers find that the work addresses an important and relevant problem. * (+) The proposed method is relatively simple and easy to implement. * (+) Paper is well-written and easy to follow, although there's room for improvement. * (+) Reasonable experimental evaluation and zero-shot results; multiple minor issues, as listed below. * (-) Experimental settings: reviewers point out issues with the size and scale of the datasets, lack of evaluation on downstream tasks, and fairness of comparisons (eg: comparing at epoch 0). * (-) One reviewer points out that the basis for the work (neural collapse) hasn't been verified on larger networks and datasets. * (-) Baselines: reviewers note the lack of comparisons to appropriate baselines (eg: Schuster et al.) I recommend acceptance (poster), and request the authors to incorporate the changes suggested by reviewers. recommendation: Accept (Poster) confidence: 4
XUgM4M4Aua
Class-aware Initialization of Early Exits for Pre-training Large Language Models
[ "Alperen Gormez", "Erdem Koyuncu" ]
We propose a novel class-aware weight initialization technique for early exit large language models with the purpose of accelerating pre-training. Our design utilizes the neural collapse phenomenon combined with a Gaussian mixture model for the distribution of feature vectors at a given layer. Specifically, we calculate the average of token representations at the early exit point and use the resulting vectors together with class probabilities for initializing the early exit vectors. The next token prediction accuracy of our class-aware initialization technique is up to five times higher than other baselines at epoch zero and matches or surpasses them in later epochs throughout the pre-training process.
[ "early exit", "weight initialization", "efficient", "class aware", "class means", "pre-training", "LLMs" ]
https://openreview.net/pdf?id=XUgM4M4Aua
0TRHknuO7w
decision
1,718,721,512,146
XUgM4M4Aua
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
WwAgRBikCq
Communication Efficient Federated Learning with Differentiated Aggregation
[ "Peyman Gholami", "Hulya Seferoglu" ]
This paper focuses on reducing the communication cost of federated learning by exploring generalization bounds and representation learning. We first characterize a tighter generalization bound for one-round federated learning based on local clients' generalizations and heterogeneity of data distribution (non-iid scenario). We also characterize a generalization bound in R-round federated learning and its relation to the number of local updates (local stochastic gradient descents (SGDs)). Then, based on our generalization bound analysis and our representation learning interpretation of this analysis, we show for the first time that less frequent aggregations, hence more local updates, for the representation extractor (usually corresponds to initial layers) leads to the creation of more generalizable models, particularly for non-iid scenarios. We design a novel Federated Learning with Adaptive Local Steps (FedALS) algorithm based on our generalization bound and representation learning analysis. FedALS employs varying aggregation frequencies for different parts of the model, so reduces the communication cost. The paper is followed with experimental results showing the effectiveness of FedALS.
[ "Federated Learning", "Generalization Bound", "Distributed Optimization", "Communication Efficiency", "Differentiated Aggregation" ]
https://openreview.net/pdf?id=WwAgRBikCq
vaplxnkljd
decision
1,718,722,154,027
WwAgRBikCq
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
WwAgRBikCq
Communication Efficient Federated Learning with Differentiated Aggregation
[ "Peyman Gholami", "Hulya Seferoglu" ]
This paper focuses on reducing the communication cost of federated learning by exploring generalization bounds and representation learning. We first characterize a tighter generalization bound for one-round federated learning based on local clients' generalizations and heterogeneity of data distribution (non-iid scenario). We also characterize a generalization bound in R-round federated learning and its relation to the number of local updates (local stochastic gradient descents (SGDs)). Then, based on our generalization bound analysis and our representation learning interpretation of this analysis, we show for the first time that less frequent aggregations, hence more local updates, for the representation extractor (usually corresponds to initial layers) leads to the creation of more generalizable models, particularly for non-iid scenarios. We design a novel Federated Learning with Adaptive Local Steps (FedALS) algorithm based on our generalization bound and representation learning analysis. FedALS employs varying aggregation frequencies for different parts of the model, so reduces the communication cost. The paper is followed with experimental results showing the effectiveness of FedALS.
[ "Federated Learning", "Generalization Bound", "Distributed Optimization", "Communication Efficiency", "Differentiated Aggregation" ]
https://openreview.net/pdf?id=WwAgRBikCq
UQ4E0y8Lbm
official_review
1,717,883,497,172
WwAgRBikCq
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission16/Reviewer_nSs2" ]
title: Communication Efficient Federated Learning with Differentiated Aggregation summary: Below are the main contributions of the paper: a) The paper provides a more precise generalization bound for one-round federated learning. This bound is based on the local clients' generalizations and the heterogeneity of the data distribution (non-iid scenario). b) It extends the analysis to R-round federated learning, characterizing the generalization bound and its relationship to the number of local updates (local stochastic gradient descents (SGDs)). c) The analysis reveals that less frequent aggregations, resulting in more local updates for the representation extractor (typically corresponding to the initial layers), lead to the creation of more generalizable models, especially in non-iid scenarios. d) Based on the generalization bound and representation learning analysis, the paper introduces the Federated Learning with Adaptive Local Steps (FedALS) algorithm. This algorithm employs varying aggregation frequencies for different parts of the model, thereby reducing the communication cost. e) The paper concludes with experimental results demonstrating the effectiveness of the FedALS algorithm. strengths: Below is a summary of the main strong points of the paper: - Introduces the FedALS algorithm to adapt aggregation frequencies, reducing communication overhead in federated learning. - Provides tighter generalization error bounds for both one-round and multi-round federated learning, considering data heterogeneity. - Demonstrates that less frequent aggregations for initial layers (representation extractor) lead to more generalizable models, reducing communication costs. - Develops FedALS, which varies aggregation frequencies for different model parts, enhancing efficiency and performance. - Supports theoretical contributions with experimental results showing the effectiveness of FedALS. - Addresses the challenge of non-iid data distributions, relevant for real-world federated learning applications. weaknesses: - The paper's main contribution, FedALS, has to be applied to more challenging real datasets that exhibit strongly non-iid behaviour, such as the Clothing1M dataset (https://paperswithcode.com/dataset/clothing1m). - The related work section ignores a vast number of recent papers with powerful algorithms that are very competitive with the proposed algorithm; a few examples are listed below. The authors have to compare their results to these algorithms for a fair treatment of the proposed work, as one of these algorithms might have better performance in all aspects: 1) Mishchenko, Konstantin, et al. "Proxskip: Yes! local gradient steps provably lead to communication acceleration! finally!." International Conference on Machine Learning. PMLR, 2022. 2) Yi, Kai, et al. "Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning." arXiv preprint arXiv:2406.01115 (2024). 3) Tyurin, Alexander, and Peter Richtárik. "A computation and communication efficient method for distributed nonconvex problems in the partial participation setting." Advances in Neural Information Processing Systems 36 (2024). confidence: 3 limitations: - One limitation of the proposed work is the lack of further privacy-preserving aspects in federated learning, specifically encrypting the local gradients that are uploaded to the central server while maintaining good overall performance of the learning scheme.
This could be potential future work for the authors to make the paper more comprehensive. - Another limitation is the lack of explanation regarding the cost of optimizing the number of communication rounds in terms of the overall model performance, such as in FedAvg. This is important because every optimization of the communication rounds is likely to have some technical impact on the overall model performance. suggestions: - I strongly recommend that the authors try freezing the initial layers of every local client after these layers have learned the main representation of their training datasets. Only the other layers should continue to be updated, with the first layers kept constant to speed up convergence. This approach may not compromise the overall accuracy of the final model.
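To make the differentiated-aggregation idea concrete for readers, a minimal sketch of the schedule described in the abstract and summary: the later layers ("head") are averaged across clients frequently, while the representation extractor is averaged only every few synchronization points, saving communication. This is my own illustration; the half/half layer split, the periods, and the plain parameter averaging are assumptions and not the authors' implementation.

```python
from typing import List
import torch

def fedals_style_sync(clients: List[torch.nn.Module], step: int,
                      base_period: int = 4, extractor_multiplier: int = 8) -> None:
    """Average the head every `base_period` local steps and the representation
    extractor only every `base_period * extractor_multiplier` steps (sketch)."""
    names = [n for n, _ in clients[0].named_parameters()]
    split = len(names) // 2                      # assumption: first half = extractor
    extractor_names, head_names = names[:split], names[split:]

    def average(param_names):
        with torch.no_grad():
            for name in param_names:
                params = [dict(c.named_parameters())[name] for c in clients]
                mean = torch.stack([p.detach() for p in params]).mean(dim=0)
                for p in params:
                    p.copy_(mean)                # broadcast the average back to clients

    if step % base_period == 0:
        average(head_names)                      # frequent, cheap aggregation of the head
    if step % (base_period * extractor_multiplier) == 0:
        average(extractor_names)                 # rare aggregation of the large extractor
```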
VYfJaHeVod
Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough
[ "Konstantin Dobler", "Gerard de Melo" ]
We investigate continued pretraining of LLMs for language adaptation on a tight academic budget: a setting in which only a few GPUs can be used in parallel, for a heavily constrained duration. We focus on adapting Mistral-7B to German or Arabic and evaluate several techniques to improve efficiency and effectiveness in this setting. Our German models adapted on this tight compute budget underperform compared to the base Mistral-7B, while our Arabic models outperform several baselines, showing that for sufficiently well-represented languages, continued pretraining for specialization is not always helpful. Our main findings focus on training precision and tokenizer swapping. Our results show that pure bfloat16 training is a viable alternative to mixed-precision training, while being much faster when only using a few GPUs. Swapping the tokenizer for a specialized one yields more efficient tokenization and is competitive with the original tokenizer, which already contains some German tokens, but did not significantly increase performance for German. Code and model weights are available on GitHub.
[ "bfloat16", "efficient training", "language adaptation", "continued pretraining" ]
https://openreview.net/pdf?id=VYfJaHeVod
rLuAVI9k2O
decision
1,718,721,864,971
VYfJaHeVod
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
VYfJaHeVod
Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough
[ "Konstantin Dobler", "Gerard de Melo" ]
We investigate continued pretraining of LLMs for language adaptation on a tight academic budget: a setting in which only a few GPUs can be used in parallel, for a heavily constrained duration. We focus on adapting Mistral-7B to German or Arabic and evaluate several techniques to improve efficiency and effectiveness in this setting. Our German models adapted on this tight compute budget underperform compared to the base Mistral-7B, while our Arabic models outperform several baselines, showing that for sufficiently well-represented languages, continued pretraining for specialization is not always helpful. Our main findings focus on training precision and tokenizer swapping. Our results show that pure bfloat16 training is a viable alternative to mixed-precision training, while being much faster when only using a few GPUs. Swapping the tokenizer for a specialized one yields more efficient tokenization and is competitive with the original tokenizer, which already contains some German tokens, but did not significantly increase performance for German. Code and model weights are available on GitHub.
[ "bfloat16", "efficient training", "language adaptation", "continued pretraining" ]
https://openreview.net/pdf?id=VYfJaHeVod
SCpiZqUAsB
official_review
1,718,391,921,871
VYfJaHeVod
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission27/Reviewer_Ac9u" ]
title: Interesting work but needs more experimental evidence to make conclusions more robust summary: The authors investigated methods for adapting LLMs to different languages under constrained computational resources. The main focus is on continuing the pretraining of the Mistral-7B model for German and Arabic, evaluating techniques to enhance efficiency (tokenizer swapping and pure bfloat16 training) and performance when only a few GPUs are available. The main contributions and findings are as follows: 1. Training precision: Pure bfloat16 training is a viable alternative to mixed-precision training, offering substantial efficiency gains without significant performance loss, particularly beneficial when limited to fewer GPUs. 2. Tokenizer swapping: Replacing the original tokenizer with a specialized one is effective, providing efficient tokenization and maintaining performance levels. 3. Adapting Mistral-7B to German underperformed compared to the base model, while adaptation to Arabic showed significant improvement, indicating that adaptation is more beneficial for less well-represented languages. strengths: - The research addresses a critical issue for academic settings: how to do continued pre-training of LLMs in resource-constrained environments (with only a few GPUs available). - The paper provides good evidence on mixed-precision and bfloat16 training and provides reasonable explanations for the outcomes. - The investigation into tokenizer swapping offers a novel perspective on improving tokenization efficiency, which could be broadly applicable in LLM adaptations. - The paper is well-structured, with a clear presentation of the problem, methodology, results, and conclusions. weaknesses: - The definition of a "tight academic compute budget" is somewhat specific, focusing on server-grade GPUs like Nvidia A100s. This might not fully represent the variability in computational resources available across different academic institutions. - The finding that adaptation to well-represented languages (like German) may not always yield benefits is significant but needs further exploration to understand the underlying reasons and potential solutions. - The reduced number of training steps and tokens compared to the reference LeoLM project might limit the depth of the analysis. Exploring longer training durations, even within constrained budgets, could provide more comprehensive insights. confidence: 4 limitations: - The experiments are only performed on Mistral-7B, which limits generalizability. There is a need to perform benchmarking on other open-source models as well. - The experiments are conducted on only two languages, German and Arabic. While these languages provide valuable insights, a broader range of languages would strengthen the generalizability of the conclusions.
VYfJaHeVod
Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough
[ "Konstantin Dobler", "Gerard de Melo" ]
We investigate continued pretraining of LLMs for language adaptation on a tight academic budget: a setting in which only a few GPUs can be used in parallel, for a heavily constrained duration. We focus on adapting Mistral-7B to German or Arabic and evaluate several techniques to improve efficiency and effectiveness in this setting. Our German models adapted on this tight compute budget underperform compared to the base Mistral-7B, while our Arabic models outperform several baselines, showing that for sufficiently well-represented languages, continued pretraining for specialization is not always helpful. Our main findings focus on training precision and tokenizer swapping. Our results show that pure bfloat16 training is a viable alternative to mixed-precision training, while being much faster when only using a few GPUs. Swapping the tokenizer for a specialized one yields more efficient tokenization and is competitive with the original tokenizer, which already contains some German tokens, but did not significantly increase performance for German. Code and model weights are available on GitHub.
[ "bfloat16", "efficient training", "language adaptation", "continued pretraining" ]
https://openreview.net/pdf?id=VYfJaHeVod
4bvDMTwgo0
official_review
1,718,222,276,478
VYfJaHeVod
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission27/Reviewer_4wL9" ]
title: This paper is somewhat insightful in using bf16 for continued pretraining summary: Interesting results with good analysis. The paper's writing can be improved, and larger-scale experiments would make the results even more convincing. strengths: Overall good quality of the analysis and experiments. The analysis of the regularization effect of bf16 optimization is interesting. It is quite interesting to see that the regularization effect of using a lower-precision update would cause the layer-wise differences. I am convinced that using bf16 updates leads to a similar effect as freezing layers (or adapting the learning rate), even though it is not clear whether its effect on the other layers is also important to the performance. weaknesses: It would be nice to see the performance if the RMSNorm layers are frozen over the entire training session. Page 2, line 55: "mixed-precision bfloat16 training will run out of memory or is only possible if used with inefficient memory-saving techniques like activation checkpointing". There are multiple solutions to reduce the memory peak (for example, the paged optimizer from QLoRA), and they may not cause as much time overhead as gradient checkpointing. If I understand correctly, the speed-up of using pure bf16 mostly comes from not applying gradient checkpointing. Other memory-saving techniques should be compared to reach a more convincing conclusion. There are multiple compilation errors rendered as "?" on page 4 (line 205), page 6 (lines 294-296), and page 8 (line 421). confidence: 4
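For readers less familiar with the distinction discussed above, a generic PyTorch sketch of the two regimes being compared: mixed precision keeps fp32 weights and optimizer states with bf16 compute under autocast, whereas pure bf16 keeps weights, activations, gradients, and optimizer states in bf16. This is only an illustration of the general setup, not the paper's training code; the model, optimizer, and sizes are placeholders.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(8, 1024, device=device)

# (a) Mixed precision: fp32 master weights/optimizer states, bf16 compute via autocast.
model = torch.nn.Linear(1024, 1024).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()
loss.backward()
opt.step(); opt.zero_grad()

# (b) Pure bf16: everything in bf16, avoiding the fp32 master copy
#     (less memory, but less precise updates).
model_bf16 = torch.nn.Linear(1024, 1024).to(device).to(torch.bfloat16)
opt_bf16 = torch.optim.AdamW(model_bf16.parameters(), lr=1e-5)
loss = model_bf16(x.to(torch.bfloat16)).pow(2).mean()
loss.backward()
opt_bf16.step(); opt_bf16.zero_grad()
```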
UHtBr3F6qd
MoReDrop: Dropout without Dropping
[ "Li Jiang", "Duo Li", "Yichuan Ding", "Xue Liu", "Victor Wai Kin Chan" ]
Dropout is a widely adopted technique that significantly improves the generalization of deep neural networks in various domains. However, the discrepancy in model configurations between the training and evaluation phases introduces a significant challenge: the model distributional shift. In this study, we introduce an innovative approach termed Model Regularization for Dropout (MoReDrop). MoReDrop actively updates solely the dense model during training, targeting its loss function optimization and thus eliminating the primary source of distributional shift. To further leverage the benefits of dropout, we introduce a regularizer derived from the output divergence of the dense and its dropout models. Importantly, sub-models receive passive updates owing to their shared attributes with the dense model. To reduce computational demands, we introduce a streamlined variant of MoReDrop, referred to as MoReDropL, which utilizes dropout exclusively in the final layer. Our experiments, conducted on several benchmarks across multiple domains, consistently demonstrate the scalability, efficiency, and robustness of our proposed algorithms.
[ "Deep Learning", "Dropout", "Scalbility", "Model Distributional Shift", "Regularization" ]
https://openreview.net/pdf?id=UHtBr3F6qd
MayQB91e9b
official_review
1,717,763,974,888
UHtBr3F6qd
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission6/Reviewer_e1EW" ]
title: Review of "MoReDrop: Dropout without Dropping" summary: The paper proposes a regularisation method for machine learning methods that forces the model weights to be similar to those obtained when using dropout. It is thus a regularisation alternative to dropout. I like the idea, and the results look good, but I am missing some details, some clarity, and some comparisons. strengths: I think the method is well-presented and the idea clear. The results appear favourable for the proposed method. weaknesses: There are quite some things that could be made more clear in the paper, or explained better. For instance: - Already in the abstract, you start to talk about "the dense model", without explaining/defining what this means in this context. - You talk about distribution shift already in the abstract without explaining/defining what this means in this context. This should be explained early on, because you return to this in the introduction without explanation. Perhaps even illustrate the distribution shift, so that the problem becomes clear. - The abstract talk about divergence between the dense and dropout models, but (KL) divergence is what R-Drop used, you specifically propose something else. - You say that the other methods fail to prevent distribution shift because they apply "the expectation operator during evaluation", but no explanation/proof/illustration is presented. Of course in practice these other methods implement the expectation using stochastic samples, but in the limit the results should be the same (I would assume). - Around line 40, right column: You say that distances are computed in the other method, but never (until later) say what those distances are between. - What do you mean on line 89 when you say the regularised "functions as a toolkit"? Major comments: - Line 108, right: The "dropout brings inconsistency" is note clear to me. - Line 135, left: Not clear what the significant difference would be. - Line 127, right: Should be 0 to L, right? - All: l is both a layer index and a loss function. Also, l is defined in two ways as a loss function. Be consistent and avoid ambiguities. - Line 142, right: Not setting weights, but setting activations to zero. - Line 148, right: The h is first a vector, and then as a function. - Line 155, right: The transform function has an index, but the weights in it does not. - Equation 1: You say that the expected H is the output of the ensemble, but previous work has concluded that the ensemble approximates a geometric mean over the subnetworks (see e.g., https://papers.nips.cc/paper_files/paper/2013/hash/71f6278d140af599e06ad9bf1ba03cb0-Abstract.html, https://arxiv.org/abs/1312.6197, or https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3996711/). - Equation 1: You say that the expected dropout mask is the one used for the "dense network", but that would be a "mask" with all elements being 1-p. The input there should instead be a mask with all ones, since no elements are removed. - Don't use colon (:) before equations. Make the equations part of the sentences. - Equations 2 and 3: The left hand side doesn't use the index i. - The scaled version of the logistic sigmoid looks like just the tanh? But not quite, since it seems like 1 minus the numerator. Something's off there. Please double-check that you got it right. - I assume the regularisation term, R, is a function of \theta? This should be made clear in both Equations 4 and 5. - Line 242: There's not x and y in the statement. 
- It seems strange to me that you rely on the expectation of the regulariser being positive, and then minimise that. That will probably work well, and seems to work well, but why not formulate a regulariser that immediately does what you actually want it to do, instead of allowing, as you say, undesirable solutions in the first place? The squared distance seems more reasonable, but that's fraternal dropout then. Please motivate your choice clearly. At first glance it does not look like it matches any criterion, but there's a connection between the sigmoid of the expected ensemble output and the geometric mean. Perhaps it is that you actually make the geometric means agree? See the links above. - The experiments seem non-exhaustive to me. I would like to see a clear comparison between at least standard dropout, R-Drop, and your method for all datasets. - Explain somewhere what the different fonts mean in the tables. - If I understand correctly (please explain this clearly), the standard errors come from running the same experiments multiple times with different initial weights? Note that this does not capture any aleatoric uncertainty stemming from the data, which is usually what you would want to present in maximum likelihood learning. You probably want to do some kind of resampling, or capture the uncertainty in the mean scores that you present. - Table 4: Why four significant digits here? Do remove two so that you have two here as well. - It will not be a fair comparison if you do a grid search for your method and use literature values for the competing methods. You need to do the same type of grid searches for all methods. - It seems that in most cases, the proposed method is added on top of R-Drop, or did I misunderstand this? If so, you cannot say that it is your method that is better. If so, do run the baseline method without dropout and without R-Drop, and only with your method, and then compare to using only dropout and using only R-Drop. - How were other hyper-parameters selected, such as the number of epochs? - Line 446, left: Do you mean re-training/fine-tuning? - Line 544: Explain the step where you removed the product. Also, note that you use the index i twice. Use another index for one of them. - Do add uncertainties (standard errors) also to the tables in the appendix. - Note that most of your results in the tables actually don't show significant differences between the different methods. You can only claim that your method actually works better in the cases where you have a significant difference. Minor comments: - Line 18, right column: Dropout always uses Bernoulli distributed masks, not in general. Other versions exist, such as Gaussian Dropout, but that's another method with other properties. - Line 27, right column: What do you mean by assembling sub-models? - Page 1, footnote: I don't understand it. Please clarify or remove if not critical. - Line 96, right: Sentence messed up. - Line 127, left: "the on". - Line 130, left: "firstly find". - Line 123, right: "respective ... respectively". - All: M_i is all in bold, so the index i is also bold. - After Equation 1: Saying RHS and LHS of the minus is a bit contrived. Perhaps just say the first and second terms on the left hand side? - Line 190, right: Comma at the end of the equation, should be full stop. - Line 432, right: No space before RTE. confidence: 4 limitations: The experiments need to be clarified and probably extended. Also, the motivation for the regulariser is not clear.
See my more extensive comments under weaknesses. suggestions: See my comments under weaknesses.
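To make my point about the regulariser concrete, here is how I read the overall training objective: a task loss on the dense (no-dropout) forward pass plus a penalty on the disagreement with a dropout forward pass of the same weights, with gradients flowing only through the dense pass. This sketch is my own reconstruction, not the authors' code — the MSE is only a placeholder for the paper's scaled-sigmoid regulariser, and I assume eval()/train() toggles nothing but dropout layers.

```python
import torch
import torch.nn.functional as F

def moredrop_style_loss(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                        alpha: float = 1.0) -> torch.Tensor:
    """Illustrative MoReDrop-style objective (a reconstruction, not the authors' code)."""
    model.eval()                          # dense pass: dropout off
    dense_logits = model(x)
    task_loss = F.cross_entropy(dense_logits, y)

    model.train()                         # dropout pass: dropout on
    with torch.no_grad():                 # sub-models are only passively updated
        drop_logits = model(x)

    reg = F.mse_loss(dense_logits, drop_logits)   # placeholder divergence, not the paper's exact form
    return task_loss + alpha * reg                # backward() updates the dense model only
```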
UHtBr3F6qd
MoReDrop: Dropout without Dropping
[ "Li Jiang", "Duo Li", "Yichuan Ding", "Xue Liu", "Victor Wai Kin Chan" ]
Dropout is a widely adopted technique that significantly improves the generalization of deep neural networks in various domains. However, the discrepancy in model configurations between the training and evaluation phases introduces a significant challenge: the model distributional shift. In this study, we introduce an innovative approach termed Model Regularization for Dropout (MoReDrop). MoReDrop actively updates solely the dense model during training, targeting its loss function optimization and thus eliminating the primary source of distributional shift. To further leverage the benefits of dropout, we introduce a regularizer derived from the output divergence of the dense and its dropout models. Importantly, sub-models receive passive updates owing to their shared attributes with the dense model. To reduce computational demands, we introduce a streamlined variant of MoReDrop, referred to as MoReDropL, which utilizes dropout exclusively in the final layer. Our experiments, conducted on several benchmarks across multiple domains, consistently demonstrate the scalability, efficiency, and robustness of our proposed algorithms.
[ "Deep Learning", "Dropout", "Scalbility", "Model Distributional Shift", "Regularization" ]
https://openreview.net/pdf?id=UHtBr3F6qd
MVOehWYNZT
official_review
1,718,117,533,460
UHtBr3F6qd
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission6/Reviewer_2BPN" ]
title: review summary: This study investigates the problem of model distributional shift between the training and evaluation stages when using dropout, and proposes a new approach, MoReDrop, which solely updates the dense model parameters and targets model consistency across both the training and inference stages. The authors test the effectiveness of MoReDrop on various models and tasks. Pros: - The paper supports its contributions with extensive experimental evaluations across many benchmarks. - The idea is simple and easy to understand - The idea is novel Cons: - The performance improvement is marginal while requiring longer training time. - More baselines are needed to validate the advantage of the proposed method. strengths: - The idea is simple and straightforward - Extensive evaluation across multiple settings weaknesses: - Lacks important baselines confidence: 3
UHtBr3F6qd
MoReDrop: Dropout without Dropping
[ "Li Jiang", "Duo Li", "Yichuan Ding", "Xue Liu", "Victor Wai Kin Chan" ]
Dropout is a widely adopted technique that significantly improves the generalization of deep neural networks in various domains. However, the discrepancy in model configurations between the training and evaluation phases introduces a significant challenge: the model distributional shift. In this study, we introduce an innovative approach termed Model Regularization for Dropout (MoReDrop). MoReDrop actively updates solely the dense model during training, targeting its loss function optimization and thus eliminating the primary source of distributional shift. To further leverage the benefits of dropout, we introduce a regularizer derived from the output divergence of the dense and its dropout models. Importantly, sub-models receive passive updates owing to their shared attributes with the dense model. To reduce computational demands, we introduce a streamlined variant of MoReDrop, referred to as MoReDropL, which utilizes dropout exclusively in the final layer. Our experiments, conducted on several benchmarks across multiple domains, consistently demonstrate the scalability, efficiency, and robustness of our proposed algorithms.
[ "Deep Learning", "Dropout", "Scalability", "Model Distributional Shift", "Regularization" ]
https://openreview.net/pdf?id=UHtBr3F6qd
LKjtJAjAcP
decision
1,718,721,776,442
UHtBr3F6qd
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
UHtBr3F6qd
MoReDrop: Dropout without Dropping
[ "Li Jiang", "Duo Li", "Yichuan Ding", "Xue Liu", "Victor Wai Kin Chan" ]
Dropout is a widely adopted technique that significantly improves the generalization of deep neural networks in various domains. However, the discrepancy in model configurations between the training and evaluation phases introduces a significant challenge: the model distributional shift. In this study, we introduce an innovative approach termed Model Regularization for Dropout (MoReDrop). MoReDrop actively updates solely the dense model during training, targeting its loss function optimization and thus eliminating the primary source of distributional shift. To further leverage the benefits of dropout, we introduce a regularizer derived from the output divergence of the dense and its dropout models. Importantly, sub-models receive passive updates owing to their shared attributes with the dense model. To reduce computational demands, we introduce a streamlined variant of MoReDrop, referred to as MoReDropL, which utilizes dropout exclusively in the final layer. Our experiments, conducted on several benchmarks across multiple domains, consistently demonstrate the scalability, efficiency, and robustness of our proposed algorithms.
[ "Deep Learning", "Dropout", "Scalability", "Model Distributional Shift", "Regularization" ]
https://openreview.net/pdf?id=UHtBr3F6qd
A7I5oQGbgm
meta_review
1,718,703,058,762
UHtBr3F6qd
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission6/Area_Chair_7Lsb" ]
metareview: **Strengths** - Updating the dense model and incorporating dropout regularization through the loss function are interesting differentiations from prior work. - The proposed idea is intuitive, easy to implement, and promising. - The evaluation is fairly extensive and includes a decent explanation of the results. **Weaknesses** - Evaluation is missing important baselines - The writing could be improved in multiple places, as detailed by the reviewers. **Summary** I think the novelty of the idea and the decent evaluation make this an interesting work for the community. recommendation: Accept (Oral) confidence: 4
UHtBr3F6qd
MoReDrop: Dropout without Dropping
[ "Li Jiang", "Duo Li", "Yichuan Ding", "Xue Liu", "Victor Wai Kin Chan" ]
Dropout is a widely adopted technique that significantly improves the generalization of deep neural networks in various domains. However, the discrepancy in model configurations between the training and evaluation phases introduces a significant challenge: the model distributional shift. In this study, we introduce an innovative approach termed Model Regularization for Dropout (MoReDrop). MoReDrop actively updates solely the dense model during training, targeting its loss function optimization and thus eliminating the primary source of distributional shift. To further leverage the benefits of dropout, we introduce a regularizer derived from the output divergence of the dense and its dropout models. Importantly, sub-models receive passive updates owing to their shared attributes with the dense model. To reduce computational demands, we introduce a streamlined variant of MoReDrop, referred to as MoReDropL, which utilizes dropout exclusively in the final layer. Our experiments, conducted on several benchmarks across multiple domains, consistently demonstrate the scalability, efficiency, and robustness of our proposed algorithms.
[ "Deep Learning", "Dropout", "Scalability", "Model Distributional Shift", "Regularization" ]
https://openreview.net/pdf?id=UHtBr3F6qd
2yZW9HNHEy
official_review
1,718,013,477,977
UHtBr3F6qd
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission6/Reviewer_LamD" ]
title: Official Review summary: The paper proposes an alternative (MoReDrop) to dropout, as a way to mitigate the distributional shift between training and inference. In their method, the loss is calculated with and without dropout (model loss and sub-model loss), and the weight update is performed in relation to both. A lighter approach (MoReDropL), which only performs a single forward propagation, is also introduced. The experimental results presented in the paper show improvement for a wide range of tasks, including text (BERT family), image classification (ViT-B/16 and ResNet-18, CIFAR / ImageNet), and CIFAR-10 image generation. strengths: * Targets dropout, an important cornerstone of model training * Good empirical results, on a large variety of tasks * Feasible, easy-to-implement method. Computational costs are discussed and acknowledged. weaknesses: * I found Section 4 (the main section describing the method) hard to follow and would suggest rewriting 4.1 and 4.2. The description of MoReDrop is unnecessarily messy: $\mathcal{R}$ is suddenly introduced without explaining that it is a regularization term and the discussion of KL/L2 divergence is poorly connected to the rest of the section. There are more cases like this, which makes it harder to understand the method (e.g. the line starting with "Unlike conventional methods") * The main motivation for using the new method is the distributional shift of dropout, and the paper does a decent job describing it. However, I didn't see an argument for why distributional shift is a problem. To me, it makes sense that regularization methods (Dropout, as well as data augmentation or mixup) would cause a distributional shift: after all, a common interpretation of these methods is that they try to make training "harder". It is common for models to perform worse during training for the same reason. I also remember previous works about dropout [1] claiming that using an ensemble of dropout submodels results in worse results than what you get with standard dropout, despite having seemingly lower distributional shift. * In the case of BERT, there was a large HP scan, but the details are not entirely clear to me, so I am not sure whether the comparison is fair. Other experiments seem to have more robust hyper-parameters, and an ablation study is included in the supplementary. [1] The Implicit and Explicit Regularization Effects of Dropout: https://arxiv.org/abs/2002.12915 confidence: 4 suggestions: 1. Rewrite 4.1 and 4.2 2. In Algorithm 1, you first run the dense model, followed by the sub-model. This will result in higher peak memory since when running the sub-model, you must also keep the activations of the dense model in memory (you will perform back-propagation later). This memory can be saved by simply running the sub-model first-- it runs on ``detach'' mode and we don't need to keep anything except the loss in the end. 3. Any changes that would strengthen the connection between the theoretical reasoning (distributional shift) and the empirical result would be helpful.
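To make the training step discussed in this review concrete, a minimal PyTorch-style sketch is given below. It follows the high-level description only (dense loss plus a divergence regularizer against a detached dropout sub-model); the regularizer weight `alpha`, the choice of an L2 divergence, and the function name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def moredrop_style_step(model, x, y, optimizer, alpha=1.0):
    """One MoReDrop-style update (sketch): only the dense model is optimized
    directly; dropout enters through a regularizer on the output divergence."""
    # Sub-model pass first (as suggested above): dropout active, no gradients kept.
    model.train()                      # dropout enabled
    with torch.no_grad():
        logits_sub = model(x)

    # Dense pass: dropout disabled; this is the model that receives the update.
    model.eval()                       # dropout disabled
    logits_dense = model(x)

    task_loss = F.cross_entropy(logits_dense, y)
    # Divergence between dense and sub-model outputs (L2 assumed here; the review
    # above mentions the paper's KL/L2 discussion).
    reg = F.mse_loss(logits_dense, logits_sub)

    loss = task_loss + alpha * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Running the detached sub-model pass before the dense pass keeps peak memory lower, since no dense activations need to be held during the no-grad forward.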
SeBVP0zxKp
ECO: Efficient Computational Optimization for Exact Machine Unlearning in Deep Neural Networks
[ "Yu-Ting Huang", "Pei-Yuan Wu", "Chuan-Ju Wang" ]
This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm—originally proposed by Cauwenberghs & Poggio (2000)—for exact unlearning within deep neural network (DNN) models. ECO utilizes a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, facilitating precise data removal without necessitating full model retraining. We demonstrate that ECO not only boosts efficiency but also maintains the performance of the original base DNN model, and surprisingly, it even surpasses naive retraining in effectiveness. Crucially, we are the first to adapt the CP algorithm’s decremental learning for leave-one-out evaluation to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our implementation to promote further research in this field.
[ "Machine Unlearning", "Parametric Programming", "CP Algorithm", "KKT Condition", "Dual Support Vector Machine" ]
https://openreview.net/pdf?id=SeBVP0zxKp
uTeWjhbXih
official_review
1,718,222,285,388
SeBVP0zxKp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission11/Reviewer_Hv3o" ]
title: CP algorithm with DNNs for exact machine unlearning summary: This paper integrates the CP algorithm with DNNs for exact machine unlearning. It shows a large improvement in unlearning efficiency, but I am not sure if this paper matches the topic of the workshop. strengths: Innovative method for improving machine unlearning. Well-explained algorithm and clear analysis. weaknesses: Lack of details on the DNN models used for the experiments. The performance is only compared with naive retraining. Are there any other approaches to compare with? Only the difference between MIA scores on the test set and the forget set is analyzed. What can we learn from the absolute values that reflect the percentage of samples predicted as non-training examples? confidence: 2
SeBVP0zxKp
ECO: Efficient Computational Optimization for Exact Machine Unlearning in Deep Neural Networks
[ "Yu-Ting Huang", "Pei-Yuan Wu", "Chuan-Ju Wang" ]
This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm—originally proposed by Cauwenberghs & Poggio (2000)—for exact unlearning within deep neural network (DNN) models. ECO utilizes a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, facilitating precise data removal without necessitating full model retraining. We demonstrate that ECO not only boosts efficiency but also maintains the performance of the original base DNN model, and surprisingly, it even surpasses naive retraining in effectiveness. Crucially, we are the first to adapt the CP algorithm’s decremental learning for leave-one-out evaluation to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our implementation to promote further research in this field.
[ "Machine Unlearning", "Parametric Programming", "CP Algorithm", "KKT Condition", "Dual Support Vector Machine" ]
https://openreview.net/pdf?id=SeBVP0zxKp
uERtfQUC8J
official_review
1,718,247,085,514
SeBVP0zxKp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission11/Reviewer_1Q4v" ]
title: Good insight and approach statement with somewhat inadequate experimental results summary: The paper presents an approach, ECO, to unlearn exact data points from the training set while mitigating model utility loss. To validate their approach's effectiveness, the authors evaluate it and compare it with other existing approaches. strengths: 1. The paper describes its approach clearly. 2. The paper shows the evaluations under various scenarios, making it more convincing. weaknesses: 1. The choice of datasets for evaluation: machine unlearning aims to avoid the cost of training from scratch when the training cost is too high on large models and datasets. However, the dataset and model used in the paper are tiny, leading to unimpressive results during comparison. confidence: 3 suggestions: 1. More datasets to show the disparity between approaches.
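For context on the CP algorithm referenced above (Cauwenberghs & Poggio, 2000): it tracks the Karush-Kuhn-Tucker (KKT) partition of the dual SVM solution and moves a removed point's dual coefficient to zero while keeping every remaining point's condition satisfied. The standard conditions are reproduced below as background; the notation is not taken from the submission.

```latex
% Dual SVM margin function and KKT partition (standard form, background only)
g_i \;=\; y_i\Bigl(\sum_j y_j\,\alpha_j\,K(x_i,x_j) + b\Bigr) - 1,\qquad
\begin{cases}
g_i > 0,\;\; \alpha_i = 0            & \text{(non-support vectors)}\\[2pt]
g_i = 0,\;\; 0 \le \alpha_i \le C    & \text{(margin support vectors)}\\[2pt]
g_i < 0,\;\; \alpha_i = C            & \text{(error support vectors)}
\end{cases}
```

Exact unlearning of an instance $r$ then amounts to driving $\alpha_r \to 0$ while adjusting the remaining $\alpha_j$ and $b$ so that these conditions stay satisfied, which is what lets ECO avoid retraining from scratch.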
SeBVP0zxKp
ECO: Efficient Computational Optimization for Exact Machine Unlearning in Deep Neural Networks
[ "Yu-Ting Huang", "Pei-Yuan Wu", "Chuan-Ju Wang" ]
This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm—originally proposed by Cauwenberghs & Poggio (2000)—for exact unlearning within deep neural network (DNN) models. ECO utilizes a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, facilitating precise data removal without necessitating full model retraining. We demonstrate that ECO not only boosts efficiency but also maintains the performance of the original base DNN model, and surprisingly, it even surpasses naive retraining in effectiveness. Crucially, we are the first to adapt the CP algorithm’s decremental learning for leave-one-out evaluation to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our implementation to promote further research in this field.
[ "Machine Unlearning", "Parametric Programming", "CP Algorithm", "KKT Condition", "Dual Support Vector Machine" ]
https://openreview.net/pdf?id=SeBVP0zxKp
RQ75su6dxN
meta_review
1,718,638,907,842
SeBVP0zxKp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission11/Area_Chair_koCt" ]
metareview: The manuscript extends the CP algorithm to the field of machine unlearning and achieves efficient and effective unlearning via a single model. Though all reviewers acknowledge the paper's writing quality and methodological novelty, they raise empirical evaluation issues due to the limited, small-scale neural architectures, evaluation datasets, and baselines. The AC suggests that the authors further justify the effectiveness of the proposed method via extensive evaluations. recommendation: Accept (Poster) confidence: 3
SeBVP0zxKp
ECO: Efficient Computational Optimization for Exact Machine Unlearning in Deep Neural Networks
[ "Yu-Ting Huang", "Pei-Yuan Wu", "Chuan-Ju Wang" ]
This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm—originally proposed by Cauwenberghs & Poggio (2000)—for exact unlearning within deep neural network (DNN) models. ECO utilizes a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, facilitating precise data removal without necessitating full model retraining. We demonstrate that ECO not only boosts efficiency but also maintains the performance of the original base DNN model, and surprisingly, it even surpasses naive retraining in effectiveness. Crucially, we are the first to adapt the CP algorithm’s decremental learning for leave-one-out evaluation to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our implementation to promote further research in this field.
[ "Machine Unlearning", "Parametric Programming", "CP Algorithm", "KKT Condition", "Dual Support Vector Machine" ]
https://openreview.net/pdf?id=SeBVP0zxKp
BxoLed7trF
decision
1,718,650,335,581
SeBVP0zxKp
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
RwdyPAK2xD
DrJAX: Scalable and Differentiable MapReduce Primitives in JAX
[ "J Keith Rush", "Zachary Charles", "Zachary Garrett", "Sean Augenstein", "Nicole Elyse Mitchell" ]
We present DrJAX, a JAX-based library designed to support large-scale distributed and parallel machine learning algorithms that use MapReduce-style operations. DrJAX leverages JAX's sharding mechanisms to enable native targeting of TPUs and state-of-the-art JAX runtimes, including Pathways. DrJAX embeds building blocks for MapReduce computations as primitives in JAX. This enables three key benefits. First, DrJAX computations can be translated directly to XLA HLO, enabling flexible integration with a wide array of ML training platforms. Second, DrJAX computations are fully differentiable. Last, DrJAX computations can be interpreted out to existing batch-processing compute systems, including traditional MapReduce systems like Apache Beam and cross-device compute systems like those powering federated learning applications. We show that DrJAX provides an easily programmable, performant, and scalable framework for parallelized algorithm development.
[ "parallel machine learning", "distributed machine learning", "software", "jax", "mapreduce", "federated learning" ]
https://openreview.net/pdf?id=RwdyPAK2xD
lLAz8cEgNf
official_review
1,718,192,874,589
RwdyPAK2xD
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission31/Reviewer_2iKU" ]
title: Specialised Primitives for Distributed Training in JAX summary: DrJAX is a library that adds a new way of distributing models and data over many workers, for instance in a datacentre setting. It does this by introducing a few new operations in the form of broadcast, map and reduce, which are implemented as JAX primitives that internally handle sharding and communication of data among workers. It also introduces a concept of partitioned values, where data known to be independent is made explicit. This would allow DrJAX to efficiently divide it among available compute resources. DrJAX in the paper is demonstrated using code examples and compared in experiments to approaches which rely solely on compiler optimisations. strengths: * The library has a clean integration with JAX and the associated XLA compiler stack, since it is implemented using custom JAX primitives. * DrJAX appears to have a very simple, intuitive API matching existing paradigms using map and reduce, allowing the user to scale a JAX program with a small amount of code modification. * Significantly simpler than alternatives such as using JAX's sharding API. * The task of sharding data and models in a distributed system is an important and often complex issue. DrJAX provides another tool for doing this, while importantly abstracting many implementation details, such as the specific layout of data across devices. * Based on experiments, it offers very effective parallelisation out of the box and without much tweaking. * It appears highly extensible. weaknesses: * DrJAX was compared with a naive Pythonic approach and a parallelizing compiler (GSPMD), but not directly compared to handwritten JAX code, where sharding and parallelisation were taken into account. * DrJAX does not seem to have been adopted yet in any real world applications, or at least this was not mentioned if so. * The library itself _is_ very simple, consisting of only three operations. Furthermore, it is used in this paper to perform tasks (for-loop replacement, averaging loss, tensor sharding) that do not seem exceedingly difficult to implement without the use of an additional library. The advantages of using DrJAX here could be explained further, or more complex use cases could be presented. * The code used for experiments is not present, for instance the code that was given to the GSPMD compiler in section 4. * Could include further discussion and comparison with alternatives (for scaling JAX applications), or if such tools and libraries exist that the authors know of. confidence: 5
RwdyPAK2xD
DrJAX: Scalable and Differentiable MapReduce Primitives in JAX
[ "J Keith Rush", "Zachary Charles", "Zachary Garrett", "Sean Augenstein", "Nicole Elyse Mitchell" ]
We present DrJAX, a JAX-based library designed to support large-scale distributed and parallel machine learning algorithms that use MapReduce-style operations. DrJAX leverages JAX's sharding mechanisms to enable native targeting of TPUs and state-of-the-art JAX runtimes, including Pathways. DrJAX embeds building blocks for MapReduce computations as primitives in JAX. This enables three key benefits. First, DrJAX computations can be translated directly to XLA HLO, enabling flexible integration with a wide array of ML training platforms. Second, DrJAX computations are fully differentiable. Last, DrJAX computations can be interpreted out to existing batch-processing compute systems, including traditional MapReduce systems like Apache Beam and cross-device compute systems like those powering federated learning applications. We show that DrJAX provides an easily programmable, performant, and scalable framework for parallelized algorithm development.
[ "parallel machine learning", "distributed machine learning", "software", "jax", "mapreduce", "federated learning" ]
https://openreview.net/pdf?id=RwdyPAK2xD
evglxSx8O9
official_review
1,718,381,827,409
RwdyPAK2xD
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission31/Reviewer_cAwu" ]
title: A simple set of JAX primitives to specify distributed computation, definitely worthy of discussion at the workshop. summary: Although there are many ways to specify distributed ML problems, there is room to do better. This paper makes a good claim to have done so. The paper proposes a simple set of JAX primitives (broadcast, map, reduce_sum) to specify distributed ML computations. Examples show how these primitives can indeed express several interesting computations, and it would be valuable to have this discussion at the workshop. strengths: When reviewing this paper, the fundamental question I want to answer is "Will the workshop attendees want to use this system?" and to get a picture of the answer, I ask "Will I use this? Will my colleagues use this?". I believe the answer is that we would most likely give it a go, which is certainly enough to recommend acceptance. At the same time, there are some design decisions which trouble me. It is quite clear that any distributed system ultimately needs to know about the nodes on which the computation is to be run. Let us call such a description of the nodes a "mesh", roughly following existing nomenclature. The first code snippet ``` def broadcast_double_and_sum(x): y = drjax.broadcast(x) z = drjax.map_fn(lambda a: 2*a, y) return drjax.reduce_sum(z) ``` Immediately has a non-functional vibe that is in stark contrast to the functional ethos of JAX. What is "drjax"? Presumably a module name, so whatever the description of the mesh is, it is not explicit in this listing. The reader of this code immediately wonders where the global variables are hidden. This does not make it easier to understand the value of the offering, it makes it harder. I literally cannot read this code and determine what value it computes. OK, so we quickly move to a version where the mesh is specified, and we see that only the size is really important - great, a nice property, but why hide this important information in a parse-time argument to a decorator? ``` @drjax.program(partition_size=3) def broadcast_double_and_sum(x): y = drjax.broadcast(x) z = drjax.map_fn(lambda a: 2*a, y) return drjax.reduce_sum(z) ``` I believe I would much prefer to use the package in this way: ``` def broadcast_double_and_sum(mesh, x): y = drjax.broadcast(mesh, x) z = drjax.map_fn(mesh, lambda a: 2*a, y) return drjax.reduce_sum(mesh, z) ``` Now I can clearly see that the return value will depend on the "mesh" object; if I look inside, I may notice that its value depends only on `mesh.partition_size`; so I can reason locally about the code's behaviour. Yes, this means that the mesh must be plumbed down through the program; but this is always the tradeoff in pure functional programming - either store a value in a global variable and lose composability, or explicitly plumb it through, just like JAX's random number keys. ``` def broadcast_double_and_sum(djmesh, x): y = djmesh.broadcast(x) z = djmesh.map_fn(lambda a: 2*a, y) return djmesh.reduce_sum(z) ``` Now, I get it: you're telling me what you did, and I'm saying "oh I would have done it differently". Of course that's not a reason to reject the paper, but I am explaining why the paper seems to me to have deficiencies (alluded to in future work, because extension to "hierarchically partitioned data" is very likely going to require a redesign). weaknesses: The paper should be much more straightforward about what currently exists. 
Saying "These software frameworks generally focus on enabling parallelism for their most common use case: computation of a function’s derivative across a batch of inputs." or "an algorithm author who wishes to program over partitioned data in a parallel manner finds themselves in an awkward position" is disingenuous - yes, those easy cases are made easy in most packages, but the implication is somehow that the harder cases are not possible. This, as the paper makes clear later, is not true. Competing techniques (e.g. `jax.shard_map`) are dismissed without direct and fair comparison. The paper says "Underlying ML frameworks often offer powerful parallelism primitives (e.g. `jax.shard_map`), but typically target the model developer rather than the algorithm developer". This is not a valid distinction - DrJax solves very similar problems to `shard_map`, with a different interface - so the above sentence meaninglessly suggests that `shard_map` is not a valid comparator because of an orthogonal distinction between "model developer" and "algorithm developer". Further, the paper suggests that shard_map is deficient in "not abstracting away potentially nested groups of compute nodes powering computations of mapping functions." In contrast, this paper leaves it to future work to handle "hierarchically partitioned data". The reason this is relevant is that unless DrJax can handle all the cases that shard_map can, it is not surprising that its interface is cleaner. confidence: 4 limitations: Discussed above suggestions: Some discussed above. The primary suggestion would be to remove listing 1, and go straight to listing 2. Of course, I would prefer you to explicitly pass all the information, but I understand that some people prefer global variables (maybe hidden in decorators or context managers) to pure functional programming. 107r: "elements of the same space" is meaningless - do you mean, have the same shape and dtype? 211l: when you say "structures", it might help readers to say that you mean the same thing as a JAX pytree, if that is what you mean, and if that's not what you mean, to say how you intend "structure" to be different from a pytree.
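Using only the three primitives quoted in the snippets above, a federated-averaging-style round might look as follows. This is an illustration assembled from those snippets, not code from the paper: `local_sgd_step`, the partition size, pytree support in the primitives, and the final division are all assumptions.

```python
import jax
import jax.numpy as jnp
import drjax  # API assumed from the snippets quoted above

NUM_CLIENTS = 8

def local_sgd_step(params, lr=0.1):
    # Placeholder client update; a real client would compute gradients on its own data.
    fake_grads = jax.tree_util.tree_map(jnp.ones_like, params)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, fake_grads)

@drjax.program(partition_size=NUM_CLIENTS)
def federated_averaging_round(global_params):
    per_client = drjax.broadcast(global_params)             # replicate to all clients
    updated = drjax.map_fn(local_sgd_step, per_client)       # parallel local updates
    summed = drjax.reduce_sum(updated)                       # aggregate across clients
    return jax.tree_util.tree_map(lambda s: s / NUM_CLIENTS, summed)  # average
```

Whether this reads better with an explicit mesh argument, as argued above, is exactly the design question the review raises.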
RwdyPAK2xD
DrJAX: Scalable and Differentiable MapReduce Primitives in JAX
[ "J Keith Rush", "Zachary Charles", "Zachary Garrett", "Sean Augenstein", "Nicole Elyse Mitchell" ]
We present DrJAX, a JAX-based library designed to support large-scale distributed and parallel machine learning algorithms that use MapReduce-style operations. DrJAX leverages JAX's sharding mechanisms to enable native targeting of TPUs and state-of-the-art JAX runtimes, including Pathways. DrJAX embeds building blocks for MapReduce computations as primitives in JAX. This enables three key benefits. First, DrJAX computations can be translated directly to XLA HLO, enabling flexible integration with a wide array of ML training platforms. Second, DrJAX computations are fully differentiable. Last, DrJAX computations can be interpreted out to existing batch-processing compute systems, including traditional MapReduce systems like Apache Beam and cross-device compute systems like those powering federated learning applications. We show that DrJAX provides an easily programmable, performant, and scalable framework for parallelized algorithm development.
[ "parallel machine learning", "distributed machine learning", "software", "jax", "mapreduce", "federated learning" ]
https://openreview.net/pdf?id=RwdyPAK2xD
2eDrv2f9Yo
decision
1,718,721,903,997
RwdyPAK2xD
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
QLo5lGkiyg
Variational Stochastic Gradient Descent for Deep Neural Networks
[ "Haotian Chen", "Anna Kuzina", "Babak Esmaeili", "Jakub M. Tomczak" ]
Optimizing deep neural networks (DNNs) is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework for better estimation of gradients and modeling uncertainties. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and three deep neural network architectures, where we show that VSGD converges faster and outperforms Adam and SGD.
[ "Optimization in DNNs", "Stochastic Variational Inference", "Probabilistic Inference", "Stochastic Gradient Descent" ]
https://openreview.net/pdf?id=QLo5lGkiyg
J81oNMTFEA
meta_review
1,718,635,208,715
QLo5lGkiyg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission26/Area_Chair_fqRE" ]
metareview: The manuscript is in good shape: it models gradient updates as a probabilistic model and considers integrating Stochastic Variational Inference (SVI) to derive an efficient and effective update rule. The AC recommends the acceptance of this manuscript, given its merits of (1) introducing an interesting approach, VSGD, to model gradient updates as a probabilistic model, and (2) a thorough examination and discussion of existing optimizers. However, the AC would suggest the authors take the suggestions of Reviewer r6Br into account. recommendation: Accept (Poster) confidence: 3
QLo5lGkiyg
Variational Stochastic Gradient Descent for Deep Neural Networks
[ "Haotian Chen", "Anna Kuzina", "Babak Esmaeili", "Jakub M. Tomczak" ]
Optimizing deep neural networks (DNNs) is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework for better estimation of gradients and modeling uncertainties. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and three deep neural network architectures, where we show that VSGD converges faster and outperforms Adam and SGD.
[ "Optimization in DNNs", "Stochastic Variational Inference", "Probabilistic Inference", "Stochastic Gradient Descent" ]
https://openreview.net/pdf?id=QLo5lGkiyg
F7ravn4uSm
decision
1,718,650,410,353
QLo5lGkiyg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
QLo5lGkiyg
Variational Stochastic Gradient Descent for Deep Neural Networks
[ "Haotian Chen", "Anna Kuzina", "Babak Esmaeili", "Jakub M. Tomczak" ]
Optimizing deep neural networks (DNNs) is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework for better estimation of gradients and modeling uncertainties. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and three deep neural network architectures, where we show that VSGD converges faster and outperforms Adam and SGD.
[ "Optimization in DNNs", "Stochastic Variational Inference", "Probabilistic Inference", "Stochastic Gradient Descent" ]
https://openreview.net/pdf?id=QLo5lGkiyg
DjW1wXIjvg
official_review
1,717,872,685,887
QLo5lGkiyg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission26/Reviewer_n1YJ" ]
title: Good paper, experiments can be reported better summary: This paper integrates the probabilistic framework of stochastic gradient descent (SGD) with stochastic variational inference (SVI) to achieve efficient and effective gradient updates. The VSGD optimizer models the gradient updates as a probabilistic model, treating the true and noisy gradients as latent and observed random variables, respectively. The primary contributions of this paper include: Proposing VSGD: A novel optimizer that adopts a probabilistic approach to gradient updates, providing a better estimation of true gradients and managing gradient noise effectively. Relating VSGD to other optimizers: Establishing connections between VSGD and other adaptive gradient-based optimizers like ADAM and SGD, showing how they can be viewed as specific instances or adaptations of VSGD. Empirical evaluation: Demonstrating that VSGD outperforms ADAM and SGD on image classification tasks across various deep neural network architectures, achieving lower generalization errors and competitive convergence rates. strengths: Innovative Approach: VSGD introduces a unique probabilistic perspective to gradient descent, which helps in better managing gradient noise and achieving more accurate gradient estimates. Theoretical Insights: The paper provides a comprehensive theoretical foundation, linking VSGD to existing optimizers and demonstrating its potential as a general framework for adaptive gradient-based optimization. Flexibility: The method's probabilistic nature allows for the incorporation of prior knowledge and adaptability to different noise models, making it versatile for various deep learning tasks. weaknesses: Computational Overhead: The introduction of additional operations at each gradient update step can lead to increased computational overhead compared to simpler optimizers like ADAM and SGD. Scalability: While the paper claims scalability, the actual implementation and performance on extremely large-scale datasets or architectures are not thoroughly explored. Hyperparameter Sensitivity: Although the paper addresses the stability of VSGD, the necessity of tuning several hyperparameters, including those for SVI, might still pose a challenge in practice. Experiments: (a) Choosing the best run out of 3 random seeds can introduce a slight bias. However, this is mitigated by reporting the average and variability. Specifically considering that the results are quite close in some cases. (b) If the hyperparameters are also being tuned, it should be clearly stated whether the best model was chosen after hyperparameter tuning or if the best hyperparameters were selected based on the validation performance. confidence: 4 suggestions: Report All Metrics: Include the mean, standard deviation, and best performance across multiple runs to give a comprehensive view of the model's performance.
QLo5lGkiyg
Variational Stochastic Gradient Descent for Deep Neural Networks
[ "Haotian Chen", "Anna Kuzina", "Babak Esmaeili", "Jakub M. Tomczak" ]
Optimizing deep neural networks (DNNs) is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework for better estimation of gradients and modeling uncertainties. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and three deep neural network architectures, where we show that VSGD converges faster and outperforms Adam and SGD.
[ "Optimization in DNNs", "Stochastic Variational Inference", "Probabilistic Inference", "Stochastic Gradient Descent" ]
https://openreview.net/pdf?id=QLo5lGkiyg
DAhSHbfsPN
official_review
1,718,239,428,681
QLo5lGkiyg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission26/Reviewer_r6Br" ]
title: Review of variational SGD summary: The paper proposes a probabilistic interpretation of gradient descent by representing gradients as random variables to better model the true gradient from gradient noise. strengths: - The paper discusses the current state-of-the-art optimizers and the existing literature on the topic well. - The paper also discusses how the proposed optimizer theoretically compares to ADAM and normalized SGD. weaknesses: - Major: The choice of experimentation is limited to classification. The authors should have performed experimentation with more complex objectives that suffer from noisy loss landscapes, such as SVI, MCMC, or RL, to support the authors' claim that "In VSGD we model the true gradient and the noisy gradient ... allows us to manage gradient noise more effectively…" - Although SGD requires some hyperparameter tuning, the proposed VSGD adds additional hyperparameters ($\kappa_1, \kappa_2, \gamma, K_g$). How would these hyperparameters be set? Through a grid search? According to Appendix C, $\gamma$ significantly impacts the model's performance. - How is the prior selected? - Why are the latent variables assumed to be Gamma-distributed? - The gradient sampling is not discussed. Per my understanding of the probabilistic interpretation of the gradients, how many gradient samples are required to minimize the noise in the loss landscape? - Why is $K_g$ recommended to be set to 30? If higher is better, what would setting $K_g$ to a larger value accomplish? Given a fixed $\gamma$ of 1e-8, how much would the $K_g$ hyperparameter influence the parameter update equations? - In Figure 3, for tinyImageNet on VGG19-bn, SGD reaches higher performance faster than both ADAM and VSGD; do the authors have an explanation for this? confidence: 3 limitations: Please see above weaknesses regarding choice of experiments/tasks. suggestions: - Conduct experiments with problems that exhibit a noisier loss landscape, such as Bayesian Neural Networks (BNNs) or Reinforcement Learning (RL). This will provide insights into the robustness of the proposed method. - Investigate whether the proposed method handles escaping local minima better than existing methods. One effective approach is to use pre-trained weights and fine-tune the model. This will help determine if the proposed optimizer achieves parameter updates and loss minimization more efficiently compared to current optimizers. - Typo: line 193 "argmax $L(\theta, D)$" should be argmin. - Line 224: "$\gamma \rightarrow \infty$, $K_g \rightarrow 0$, and $\gamma K_g \rightarrow \infty$": this statement is mathematically incorrect. The $\lim_{\gamma \rightarrow \infty, K_g \rightarrow 0} (\gamma K_g)$ is undefined. Please correct or clarify.
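As plain background for the probabilistic reading of gradients discussed above (and explicitly not the VSGD update rule, which uses SVI with Gamma-distributed precision variables), the simplest form of the idea can be written with Gaussian conjugacy: treat the minibatch gradient as a noisy observation of the true gradient and step along the posterior mean. The function name and the fixed precisions below are illustrative assumptions.

```python
import numpy as np

def shrunken_sgd_step(theta, noisy_grad, prior_mean, lr=0.1,
                      prior_precision=1.0, noise_precision=4.0):
    """Generic illustration (not VSGD): posterior mean of the true gradient under a
    Gaussian prior N(prior_mean, 1/prior_precision) and Gaussian observation noise
    with precision noise_precision."""
    post_precision = prior_precision + noise_precision
    post_mean = (prior_precision * prior_mean
                 + noise_precision * noisy_grad) / post_precision
    theta_new = theta - lr * post_mean
    return theta_new, post_mean   # post_mean can serve as the prior mean next step

# Example: a single step on a toy quadratic with artificial gradient noise.
theta = np.array([1.0, -2.0])
true_grad = 2 * theta
noisy = true_grad + np.random.normal(scale=0.5, size=theta.shape)
theta, running_mean = shrunken_sgd_step(theta, noisy, prior_mean=np.zeros_like(theta))
```

When the noise precision is learned rather than fixed, the effective step size adapts per coordinate, which is the intuition behind relating such schemes to Adam-like optimizers.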
NYYCcueKI1
Coarse-to-Fine Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition
[ "Hichem Sahbi" ]
Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work.
[ "Coarse and fine-grained pruning", "semi-structured pruning", "graph convolutional networks", "skeleton-based recognition" ]
https://openreview.net/pdf?id=NYYCcueKI1
opzbeNK6BM
official_review
1,718,306,572,268
NYYCcueKI1
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission41/Reviewer_Dwcd" ]
title: A good paper with an interesting approach summary: In this paper, the authors propose an approach to prune graph convolutional networks (GCNs). Their method involves a 3-stage cascaded pruning mechanism that prunes low magnitude weights while achieving a balance between pruning blocks and pruning individual neurons. The authors show that their method achieves significant computational speedups relative to the baseline. strengths: 1. Strong results - their method achieves considerable speedups relative to the baseline and a few competitors when pruning takes place. The baseline network is also more accurate than competitors. 2. Interesting approach - the idea of achieving a balance between pruning individual weights and weight blocks showcases a nice tradeoff between runtime and accuracy. 3. The ablation study is detailed. weaknesses: 1. It is not clear to me why Equation 7 is an upper bound on the rank. It would be useful to explain this better in the paper. 2. It would be useful to know how much rank-optimization has reduced the rank of the weight matrix compared to the baseline (and perhaps how it changes for other methods in literature). This is not shown in current results. 3. The details of constructing the row and column wise adjacency matrices are not very clear. How are they constructed, and do they stay constant throughout training? confidence: 3 limitations: Minor weaknesses (as expressed above)
NYYCcueKI1
Coarse-to-Fine Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition
[ "Hichem Sahbi" ]
Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work.
[ "Coarse and fine-grained pruning", "semi-structured pruning", "graph convolutional networks", "skeleton-based recognition" ]
https://openreview.net/pdf?id=NYYCcueKI1
jRShiwUVGP
decision
1,718,650,806,844
NYYCcueKI1
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
NYYCcueKI1
Coarse-to-Fine Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition
[ "Hichem Sahbi" ]
Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work.
[ "Coarse and fine-grained pruning", "semi-structured pruning", "graph convolutional networks", "skeleton-based recognition" ]
https://openreview.net/pdf?id=NYYCcueKI1
bjO3F4HEzA
official_review
1,718,228,785,792
NYYCcueKI1
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission41/Reviewer_h3N6" ]
title: Semi-structured cascading pruning masks result in better accuracy and speedup tradeoffs in GCNs for skeleton-based recognition tasks. summary: The authors utilize a cascading mask-aggregate-selection parametrization to dynamically learn both structured and unstructured pruning masks, resulting in a *semi-structured* approach that combines the accuracy gains from unstructured pruning with the architectural speedups from structured pruning. To achieve this, each weight tensor has a series of masks applied to it. Each mask is a function of the previous (the first takes the weight tensor as the input) and sequentially fulfills one of the three steps: 1. Masking the smallest weights in a traditional unstructured manner 2. Weight-sharing across entries, rows, columns and channels, allowing for semi-structured pruning 3. A gating mechanism which selects block,column/row, or entry wise as the pruning mask for a particular tensor. They then apply their method to graph convolutional networks specifically on skeleton-based recognition, demonstrating superior results over regularization-based pruning methods. strengths: - The paper is generally well-written and is relatively easy to follow. - The method of semi-structured pruning is interesting and seems novel in this context. - The proposed method demonstrates impressive results and clearly combines the accuracy strengths of unstructured pruning with the speedup provided by structured pruning. - Equations and figures are clear and well-formatted. weaknesses: - Tables 2 and 3 seem a little unnecessary - the paper focuses specifically on the pruning of GCNs, not on the performance of baseline GCNs themselves, so dedicating half a page to comparing other architectures with GCNs on recognition tasks rather than the proposed pruning method itself is a slightly confusing decision. - Some sentences run long and could be better worded. (e.g. "Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels." in the abstract). confidence: 3 limitations: - Given the seemingly general applicability of the proposed method, I question why the authors limited their scope purely to graph convolutional networks and skeleton-based recognition. It is clear that the results demonstrated are impressive, so perhaps it would be insightful to measure the effectiveness of this algorithm across architectures and tasks. suggestions: - Utilize the space of Tables 2 and 3 for additional method-specific results. - In addition to rewording some sentences that run long, there are some small typos throughout the paper. For example, "This allows implementing an annealed (soft) thresholding function that cuts-off all the connections in smooth..." on line 165 is missing an "a" between "in" and "smooth." It will be good to comb the paper over once more to address minor errors like these.
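A schematic of the cascade described in this review, written as a PyTorch-style masking function for a single 2-D weight tensor, is sketched below. The plain sigmoid band-stop, the mean-based weight sharing, and a softmax gate over three granularities are illustrative assumptions; the paper's actual parametrization (a shared latent tensor trained with a classification loss and a rank surrogate) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def semi_structured_mask(w, tau, gate_logits, temperature=10.0):
    """Sketch of a coarse-to-fine pruning mask for a 2-D weight tensor `w`.
    `tau` is a magnitude threshold; `gate_logits` holds one logit per
    granularity (entry-wise, row-wise, column-wise)."""
    # (i) magnitude-based (band-stop style) mask: small weights are pushed towards 0.
    entry_mask = torch.sigmoid(temperature * (w.abs() - tau))

    # (ii) group-wise masks obtained by sharing the entry mask within rows / columns.
    row_mask = entry_mask.mean(dim=1, keepdim=True).expand_as(w)
    col_mask = entry_mask.mean(dim=0, keepdim=True).expand_as(w)

    # (iii) gating: a soft, differentiable arbitration between the granularities.
    gate = F.softmax(gate_logits, dim=0)                     # shape (3,)
    mask = gate[0] * entry_mask + gate[1] * row_mask + gate[2] * col_mask
    return w * mask

# Example usage with an arbitrary tensor and an initially uniform gate.
w = torch.randn(64, 128)
pruned_w = semi_structured_mask(w, tau=0.05, gate_logits=torch.zeros(3))
```

In the structured limit (the gate concentrating on the row or column mask), whole rows or columns can be dropped at inference time, which is where the reported speedups come from.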
NYYCcueKI1
Coarse-to-Fine Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition
[ "Hichem Sahbi" ]
Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work.
[ "Coarse and fine-grained pruning", "semi-structured pruning", "graph convolutional networks", "skeleton-based recognition" ]
https://openreview.net/pdf?id=NYYCcueKI1
GlQnlnJoHJ
meta_review
1,718,575,361,528
NYYCcueKI1
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission41/Area_Chair_jNdA" ]
metareview: ## Strengths * The paper is well written and organized * The approach is novel and interesting * The approach provides good performance results by combining the best of both worlds ## Weaknesses * The paper would benefit from a wider range of experiments (across larger classes of neural networks beyond GCNs) to assess the generalizability of the approach * More comparisons with other approaches to the same problem, such as HAN, are needed to assess the benefit compared to state-of-the-art solutions The general sentiment about this paper is rather positive; I recommend acceptance as an oral presentation. recommendation: Accept (Oral) confidence: 4
NYYCcueKI1
Coarse-to-Fine Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition
[ "Hichem Sahbi" ]
Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work.
[ "Coarse and fine-grained pruning", "semi-structured pruning", "graph convolutional networks", "skeleton-based recognition" ]
https://openreview.net/pdf?id=NYYCcueKI1
C67jAyj3UF
official_review
1,718,111,043,389
NYYCcueKI1
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission41/Reviewer_jZvT" ]
title: A novel semi-structured technique; however, the experiment section needs to be revised. summary: Overall rating: Borderline accept strengths: This paper proposes a GCN-based skeleton recognition model with coarse-to-fine semi-structured pruning to speed up the training and inference process. To extend traditional MP approaches that leverage structured and unstructured pruning, the proposed solution exploits a semi-structured pruning technique that combines a band-stop mechanism, a weight-sharing parametrization, and a gating mechanism. S1: The paper is well organized and includes comprehensive related work to introduce the existing techniques for the problem to the audience. S2: The semi-structured technique is novel and, according to the experiments, can speed up the process. weaknesses: W1: The introduction section may need to point out the advantages of a GCN-based solution for the skeleton recognition problem compared with a traditional CNN-based solution. Skeleton pruning is a well-studied problem for traditional CNNs as well, so the audience may wonder why the paper focuses on the pruning problem specifically for GCNs. W2: The experiment section needs to be revised; readers can easily get lost. 1) All the proposed methods and variants should be named. For now one is only referred to as “our GCN baseline”, which is hard for readers to follow. 2) Some important comparisons are missing. According to the introduction section, HAN (Liu et al., 2021) is the most related work. However, in the experiments, there is no comparison of “speed-up” and “accuracy” between the proposed solution and HAN. Even though these two methods are different, the experiments still need to show a comparison between the proposed method and HAN, and ideally other related methods, instead of just showing the performance of the proposed method at varying pruning rates. 3) The experiment section should introduce all the baselines with more information. confidence: 3
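To make the band-stop mechanism mentioned in the abstract and review above more concrete, here is a minimal PyTorch-style sketch of magnitude-based pruning through a differentiable gate. This is an illustration only — the gate shape, threshold, and all names are assumptions, not the paper's actual cascaded parametrization (which additionally includes weight-sharing and group-wise gating over a shared latent tensor).

```python
import torch

def band_stop_gate(w: torch.Tensor, threshold: float = 0.05, sharpness: float = 50.0) -> torch.Tensor:
    """Soft magnitude gate: close to 1 for |w| well above the threshold, close to 0 below it.
    A generic differentiable surrogate for magnitude pruning, not the paper's exact mechanism."""
    return torch.sigmoid(sharpness * (w.abs() - threshold))

# Toy usage: prune a latent weight tensor entry-wise during the forward pass.
latent = torch.randn(64, 64, requires_grad=True)
effective_weight = latent * band_stop_gate(latent)   # small-magnitude entries are driven towards 0
loss = effective_weight.pow(2).sum()                 # stand-in for the classification loss + rank regularizer
loss.backward()                                      # gradients flow through the soft gate
kept = (band_stop_gate(latent) > 0.5).float().mean().item()
print(f"fraction of weights kept: {kept:.2f}")
```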
NKDC2mG4hT
Towards Efficient and Scalable Training of Differentially Private Deep Learning
[ "Sebastian Rodriguez Beltran", "Marlon Tobaben", "Niki Andreas Loppi", "Antti Honkela" ]
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
[ "differential privacy", "gradient based optimization", "computational efficiency", "distributed computing" ]
https://openreview.net/pdf?id=NKDC2mG4hT
stWMO1F5YE
official_review
1,718,003,013,478
NKDC2mG4hT
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission20/Reviewer_5P2H" ]
title: The paper analyzes different implementations, which helps optimize method performance. Despite the lack of various metrics and further improvements, the efficiency of DP-SGD is worth studying. summary: This paper studies the efficiency problem of DP-SGD and identifies the critical factors that lead to high computational costs. With these analyses, the paper outlines directions to improve the efficiency of DP-SGD. The paper provides experimental contributions at various levels of code implementation. strengths: 1. There are sufficient experimental results to support the claims. 2. The advantages and disadvantages of various implementations are analyzed. Some critical points are identified, which helps the community apply or improve them. weaknesses: 1. Lack of model performance reports (e.g., accuracy) under different implementation conditions; different implementations may not always show similar results. confidence: 3
NKDC2mG4hT
Towards Efficient and Scalable Training of Differentially Private Deep Learning
[ "Sebastian Rodriguez Beltran", "Marlon Tobaben", "Niki Andreas Loppi", "Antti Honkela" ]
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
[ "differential privacy", "gradient based optimization", "computational efficiency", "distributed computing" ]
https://openreview.net/pdf?id=NKDC2mG4hT
Vm4oq7sGWY
official_review
1,718,305,296,738
NKDC2mG4hT
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission20/Reviewer_yvJ4" ]
title: Official Review summary: This paper presents a set of comprehensive experiments and analyses exploring multiple aspects of the scaling and efficiency of training with DP-SGD. The authors benchmark multiple frameworks and implementations for various model sizes. The experiments are conducted on a vision dataset (CIFAR-100). strengths: 1. The experiments are comprehensive, exploring multiple aspects of efficiency while training with DP-SGD. 2. I believe that the insights will be very useful to practitioners. weaknesses: 1. While I understand the focus of the study is not finding the best utility, end-task performance is not reported for any experiment, which makes some of the results hard to contextualize. confidence: 3 suggestions: 1. There is no mention of code availability - the Poisson subsampling implementation for other frameworks would be useful to many. 2. As mentioned before, I would appreciate some numbers showing final performance, including on other modalities like text.
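Since the reviewer highlights the Poisson subsampling implementation, here is a minimal sketch of what Poisson subsampling means in the DP-SGD setting: each example enters the batch independently with a fixed sampling rate, so batch sizes vary from step to step. This is generic illustrative code, not the authors' implementation.

```python
import torch

def poisson_subsample(dataset_size: int, sample_rate: float, generator=None) -> torch.Tensor:
    """Indices of one Poisson-subsampled batch: each example is included
    independently with probability `sample_rate`, the sampling scheme assumed
    by standard DP-SGD privacy accountants."""
    mask = torch.rand(dataset_size, generator=generator) < sample_rate
    return mask.nonzero(as_tuple=False).squeeze(-1)

# Toy usage: batch sizes fluctuate around sample_rate * dataset_size.
g = torch.Generator().manual_seed(0)
for step in range(3):
    idx = poisson_subsample(dataset_size=50_000, sample_rate=0.001, generator=g)
    print(f"step {step}: batch size {idx.numel()}")
```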
NKDC2mG4hT
Towards Efficient and Scalable Training of Differentially Private Deep Learning
[ "Sebastian Rodriguez Beltran", "Marlon Tobaben", "Niki Andreas Loppi", "Antti Honkela" ]
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
[ "differential privacy", "gradient based optimization", "computational efficiency", "distributed computing" ]
https://openreview.net/pdf?id=NKDC2mG4hT
NppbKnUCLh
official_review
1,718,310,211,032
NKDC2mG4hT
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission20/Reviewer_aKwa" ]
title: Serious empirical study on improving DP-SGD throughput summary: The authors propose an empirical study of the performance bottlenecks of differentially private stochastic gradient descent (DP-SGD). Besides the per-sample clipping, the authors identify through profiling that per-sample gradients introduce significant overhead in training compared to non-private training. They benchmark different strategies to reduce this overhead, including ghost clipping, book-keeping, an implementation in JAX, and lower precision. Experiments show that these techniques increase throughput and scale to several GPUs. strengths: - The paper is clearly written and well organized; - The work covers different methods to improve DP-SGD throughput and provides empirical evidence of their respective success. weaknesses: The paper has no obvious weakness. The empirical study seems well conducted. The novelty and the interest of its findings could be emphasized more. The paper raises the following questions: - It would be interesting to provide more details, or even assumptions, on why the JAX implementation outperforms the PyTorch one; - I would be curious to know whether, besides the JAX implementation, there are other avenues for accelerating DP-SGD, from new algorithms to other implementation improvements, possibly relying on tailored CUDA kernels. confidence: 3
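For readers unfamiliar with why per-sample gradients are the bottleneck discussed above, here is a deliberately naive DP-SGD step written with an explicit per-example loop. It is a sketch for illustration only (Opacus-style vectorization, ghost clipping, and book-keeping exist precisely to avoid this pattern), and all hyper-parameter values are placeholders.

```python
import torch
from torch import nn

def dp_sgd_step(model: nn.Module, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """Naive DP-SGD step with a per-example loop (the main source of overhead).
    Illustrative only; production implementations avoid materializing every
    per-example gradient this way."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                               # one backward pass per example
        model.zero_grad(set_to_none=True)
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad for p in params]
        norm = torch.norm(torch.stack([g.norm() for g in grads]))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))  # per-example clipping
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = noise_mult * clip_norm * torch.randn_like(s)
            p.add_(-(lr / len(xb)) * (s + noise))

# Toy usage on a tiny linear model.
model = nn.Linear(10, 2)
xb, yb = torch.randn(8, 10), torch.randint(0, 2, (8,))
dp_sgd_step(model, nn.CrossEntropyLoss(), xb, yb)
```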
NKDC2mG4hT
Towards Efficient and Scalable Training of Differentially Private Deep Learning
[ "Sebastian Rodriguez Beltran", "Marlon Tobaben", "Niki Andreas Loppi", "Antti Honkela" ]
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
[ "differential privacy", "gradient based optimization", "computational efficiency", "distributed computing" ]
https://openreview.net/pdf?id=NKDC2mG4hT
DvVTMUUEg1
decision
1,718,650,719,175
NKDC2mG4hT
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
NKDC2mG4hT
Towards Efficient and Scalable Training of Differentially Private Deep Learning
[ "Sebastian Rodriguez Beltran", "Marlon Tobaben", "Niki Andreas Loppi", "Antti Honkela" ]
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The major drawback of DP-SGD is the drop in utility which prior work has comprehensively studied. However, in practice another major drawback that hinders the large-scale deployment is the significantly higher computational cost. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP and benchmark methods that aim at reducing the cost. Among these are more efficient implementations of DP-SGD and training with lower precision. Finally, we study the scaling behaviour using up to 80 GPUs.
[ "differential privacy", "gradient based optimization", "computational efficiency", "distributed computing" ]
https://openreview.net/pdf?id=NKDC2mG4hT
1LCAeqIyJ9
meta_review
1,718,575,196,157
NKDC2mG4hT
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission20/Area_Chair_V76z" ]
metareview: ## Strengths * This paper is well written * The paper contains several useful insights for the community about how to improve DP-SGD performance * Several comprehensive experiments support the claims and are analyzed in detail ## Weaknesses * The paper lacks results on end-task performance, which could help provide insights into the trade-off between compute cost and task performance The general sentiment appears to be rather positive; I recommend acceptance as a poster presentation. recommendation: Accept (Poster) confidence: 4
KsUUzxUK7N
Lowering PyTorch's Memory Consumption for Selective Differentiation
[ "Samarth Bhatia", "Felix Dangel" ]
Memory is a limiting resource for many deep learning tasks. Beside the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information is useful though to reduce memory whenever gradients are requested for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters and inputs (dense, convolution, or normalization layers in evaluation mode) can be discarded whenever the parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time on popular convolution- and attention-based architectures.
[ "Selective automatic differentiation", "fine-tuning", "Backpropagation", "memory efficiency" ]
https://openreview.net/pdf?id=KsUUzxUK7N
Mreo1ijqDk
decision
1,718,650,772,673
KsUUzxUK7N
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
KsUUzxUK7N
Lowering PyTorch's Memory Consumption for Selective Differentiation
[ "Samarth Bhatia", "Felix Dangel" ]
Memory is a limiting resource for many deep learning tasks. Beside the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information is useful though to reduce memory whenever gradients are requested for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters and inputs (dense, convolution, or normalization layers in evaluation mode) can be discarded whenever the parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time on popular convolution- and attention-based architectures.
[ "Selective automatic differentiation", "fine-tuning", "Backpropagation", "memory efficiency" ]
https://openreview.net/pdf?id=KsUUzxUK7N
Hn8zC2YK1X
official_review
1,717,765,691,973
KsUUzxUK7N
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission34/Reviewer_x1T2" ]
title: Good Idea but Maybe Too Simple summary: The authors present a simple optimization of memory usage in scenarios where not all parameters are required to be differentiable (e.g., fine-tuning, adapters, etc.). The idea is clear and simple. They rely on a specific version of a specific framework (PyTorch), which limits its usage. The authors did not show how PyTorch's recent compilers affect memory usage. I leave it to the workshop organizers to decide if this paper should be presented there. strengths: * Very clear and simple optimization idea weaknesses: * Since PyTorch major version 2, I would like to see how the just-in-time compiler affects this behavior and hence memory usage * (Philosophical) The authors treat PyTorch as a black box: they conduct experiments to check memory consumption and how autodiff works, even though its code is open-sourced and it is not a black box overall confidence: 4 limitations: It only works with a specific version of PyTorch; at the current moment (June 7, 2024) the version used in the paper is outdated, so the autodiff behavior shown in the paper could potentially have changed already or might change in the future. suggestions: It would be nice to see if this behavior persists in other frameworks and if it changes after compilation
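A minimal sketch of the memory optimization under review: a custom linear op that stores its input for the backward pass only when the weight actually requires a gradient. This is a simplified illustration under my own naming, not the paper's drop-in implementation, which covers dense, convolution, and normalization layers.

```python
import torch

class MemoryAwareLinear(torch.autograd.Function):
    """Linear op that keeps the layer input in the autograd graph only if the
    weight needs a gradient. When the weight is frozen, the (potentially large)
    input activation is not stored, saving memory during fine-tuning."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x if weight.requires_grad else None, weight)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        grad_x = grad_out @ weight                       # always needed for earlier layers
        grad_w = grad_out.t() @ x if x is not None else None
        return grad_x, grad_w

# Toy usage: frozen weight -> the input is not stored for backward.
x = torch.randn(4, 16, requires_grad=True)
w = torch.randn(8, 16, requires_grad=False)              # e.g. a frozen backbone layer
y = MemoryAwareLinear.apply(x, w)
y.sum().backward()
print(x.grad.shape)
```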
KsUUzxUK7N
Lowering PyTorch's Memory Consumption for Selective Differentiation
[ "Samarth Bhatia", "Felix Dangel" ]
Memory is a limiting resource for many deep learning tasks. Beside the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information is useful though to reduce memory whenever gradients are requested for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters and inputs (dense, convolution, or normalization layers in evaluation mode) can be discarded whenever the parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time on popular convolution- and attention-based architectures.
[ "Selective automatic differentiation", "fine-tuning", "Backpropagation", "memory efficiency" ]
https://openreview.net/pdf?id=KsUUzxUK7N
A88AzlFRUt
official_review
1,718,258,064,653
KsUUzxUK7N
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission34/Reviewer_PzSW" ]
title: Encouraging preliminary experiments, but still insufficiently formalized regarding relevant and useful observation summary: This paper presents a novel technique for saving memory for PyTorch activations in the case where some layers are non-differentiable. strengths: The initial observation about PyTorch's behavior is interesting. It would have been interesting to extend it to other frameworks to check if this is a PyTorch specific feature. weaknesses: In my opinion, the paper lacks a more formal analysis to assess when descendants should inherit the differentiable character. This more formal study would be a first step towards more automatic detection (using compilation tools) and would help to strengthen the findings. confidence: 4 limitations: Finally, the proposed solution lacks generality and requires rewriting certain layers and models, and is still limited in scope (e.g., for normalization layers). suggestions: In conclusion, this is an interesting and original paper that starts from an observation about PyTorch's behavior that is useful in practice for limiting memory consumption in many interesting contexts (adversarial examples, fine-tuning,...). There's still a lot of work to be done to formalize, generalize and automate, but it's an interesting paper for the WS audience, with a convincing set of preliminary experiments.
KsUUzxUK7N
Lowering PyTorch's Memory Consumption for Selective Differentiation
[ "Samarth Bhatia", "Felix Dangel" ]
Memory is a limiting resource for many deep learning tasks. Beside the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information is useful though to reduce memory whenever gradients are requested for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters and inputs (dense, convolution, or normalization layers in evaluation mode) can be discarded whenever the parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time on popular convolution- and attention-based architectures.
[ "Selective automatic differentiation", "fine-tuning", "Backpropagation", "memory efficiency" ]
https://openreview.net/pdf?id=KsUUzxUK7N
6Qi1sih5eA
meta_review
1,718,575,280,613
KsUUzxUK7N
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission34/Area_Chair_kjsK" ]
metareview: ## Strengths * This paper describes a very clear idea, which does lead to lower memory consumption * Experiments are convincing ## Weaknesses * It is not clear how general the results of the paper can be. * The experiments are very preliminary and require more generalization and automation * Some of the insights could be found by reading the code rather than experimenting The general sentiment about this paper is that the idea is interesting, despite the results being rather preliminary. I recommend acceptance as a poster. recommendation: Accept (Poster) confidence: 3
JvG3BLkteR
LoQT: Low Rank Adapters for Quantized Training
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, achieving similar performance to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 13B parameters on a consumer-grade 24GB GPU.
[ "Quantization", "Low-Rank Adaptation", "Memory Efficient Training", "Large Language Models" ]
https://openreview.net/pdf?id=JvG3BLkteR
l1Devf1sfW
official_review
1,718,119,775,642
JvG3BLkteR
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission25/Reviewer_GedX" ]
title: LoQT: a combination of the QLoRA and GaLore methods for low-memory pretraining and fine-tuning summary: The LoQT paper builds on top of recent quantization and low-rank methods to present a novel pretraining and fine-tuning algorithm, enabling LLMs such as Llama 13B to fit entirely on a single consumer GPU while matching the accuracy of higher-precision and full-rank training. The main idea of the paper is to re-use the GaLore [1] low-rank gradient projection and combine it with the now common LoRA weight decomposition. As noticed by the authors, on every time interval where the GaLore projection matrix is constant, one does not need to update the full weight tensors of the model, but can just update the low-rank adapter. While this aspect in itself does not reduce the memory footprint during training (as the full weights still need to be kept), it can be combined with QLoRA [2]: the main weights can be efficiently quantized to 4 bits using the NF4 data format, reducing the model footprint by a factor of 2. Finally, combined with 8-bit Adam optimizer techniques, the authors show that Llama 13B can be trained on a single GPU with 24GB of memory. [1] https://arxiv.org/pdf/2403.03507 [2] https://arxiv.org/abs/2305.14314 strengths: The main strength of the paper is to present a novel training scheme optimizing memory footprint on all fronts (i.e. model state, optimizer state and gradients), while maintaining the same accuracy as full-rank + full-precision training. It builds elegantly on top of the GaLore and QLoRA methods. The authors have extensive experiments (as well as ablation studies) on different model sizes to show the robustness of their method. LoQT has the potential to be widely adopted by the machine learning community as it helps lower the hardware compute budget necessary for pretraining and fine-tuning LLMs. weaknesses: The main (small) weakness of the paper is the potential brittleness of the projection update frequency, i.e. $100 + 1.2^T$. The ablation study shows that a scheme more complex than a constant update frequency is necessary, but the downside is then the introduction of an additional scheduling mechanism (and its associated hyper-parameters) in the training scheme, on top of the classic learning rate schedule. It could potentially be interesting to investigate whether it can be replaced by a more "dynamic" rule, checking that the update $B_{T-1}$ is above a certain level of NF4 quantization noise before triggering the main weight $W_T$ update. From the perspective and experience of the low-precision training literature, this would feel like a more robust approach than a pre-determined scheduling rule. On the presentation side, I believe GaLore's weaknesses are slightly misrepresented and overstated in Section 4.4. It is fairly easy to apply the GaLore gradient projection $P^T G_t$ directly inside the backward pass of a model, meaning that using GaLore leads to gradient memory savings even when per-layer updates are not applied. Additionally, it also means that it can be combined efficiently with gradient accumulation and/or DDP, as these methods can be applied directly in the low-rank gradient space (the projection being a linear operator). confidence: 4
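As a quick reference for the GaLore projection $P^T G_t$ discussed in this review, here is an illustrative sketch of the core operation: project the gradient onto its top singular directions so that the optimizer state (and, if the projection is applied inside the backward pass, the stored gradient) lives in a rank-r space. Names and schedules are simplified assumptions, not the library's actual API.

```python
import torch

def galore_project(grad: torch.Tensor, rank: int):
    """Rough sketch of GaLore-style projection: keep the top-r left singular
    directions of the gradient and optimize in that low-rank space.
    (Illustrative; the actual method refreshes P on a schedule and keeps
    the Adam state in the projected space.)"""
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                 # (m, r) projection matrix
    low_rank_grad = P.T @ grad      # (r, n) -- optimizer state lives at this size
    return P, low_rank_grad

grad = torch.randn(1024, 4096)
P, g_lr = galore_project(grad, rank=64)
full_update = P @ g_lr              # project back before applying to the weight
print(g_lr.shape, full_update.shape)
```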
JvG3BLkteR
LoQT: Low Rank Adapters for Quantized Training
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, achieving similar performance to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 13B parameters on a consumer-grade 24GB GPU.
[ "Quantization", "Low-Rank Adaptation", "Memory Efficient Training", "Large Language Models" ]
https://openreview.net/pdf?id=JvG3BLkteR
R5YGJVgx5J
meta_review
1,718,575,229,426
JvG3BLkteR
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission25/Area_Chair_1jYL" ]
metareview: ## Strengths * This paper is well written and the methods are clearly explained and motivated * The paper contains both extensive experiments and a theoretical justification * This approach enables training a 13B-parameter model on a single 24GB GPU, which has a high potential for a strong impact in the community ## Weaknesses * Some specific points could be explained more clearly * This approach adds a new hyper-parameter for the projection update frequency, and it is unclear what the best value is. Maybe a dynamic approach would be more generalizable to other contexts/neural networks The general sentiment is very largely positive; I recommend acceptance as an oral presentation. recommendation: Accept (Oral) confidence: 4
JvG3BLkteR
LoQT: Low Rank Adapters for Quantized Training
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, achieving similar performance to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 13B parameters on a consumer-grade 24GB GPU.
[ "Quantization", "Low-Rank Adaptation", "Memory Efficient Training", "Large Language Models" ]
https://openreview.net/pdf?id=JvG3BLkteR
Oi2aoOtjJO
decision
1,718,650,747,538
JvG3BLkteR
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Oral) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
JvG3BLkteR
LoQT: Low Rank Adapters for Quantized Training
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, achieving similar performance to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 13B parameters on a consumer-grade 24GB GPU.
[ "Quantization", "Low-Rank Adaptation", "Memory Efficient Training", "Large Language Models" ]
https://openreview.net/pdf?id=JvG3BLkteR
GiVGDFLmf5
official_review
1,718,337,416,807
JvG3BLkteR
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission25/Reviewer_HumH" ]
title: LoQT: Low Rank Adapters for Quantized Training summary: The study introduces a novel language model training method, Low Rank Adapters for Quantized Training (LoQT). This method combines ideas from GaLore and quantization, enabling the training of models with 13 billion parameters on consumer-grade GPUs. In LoQT, the weight update is decomposed into low-rank matrices P and B. P is initialized using the SVD of the weight's gradient, and B is initialized to reduce the quantization error. Only B is trained in LoQT. PB is merged back into the full-rank matrix W after a certain number of update steps. This process continues until training stops. LoQT performs better than GaLore in pre-training language models while saving memory. This observation holds from 60M- to 1B-parameter models. LoQT performs better than GaLore and LoRA in fine-tuning when trained and evaluated on GLUE. strengths: - The paper is well written, with clear explanations of the motivation, method, and results, making it easy to follow. - LoQT's effectiveness in pre-training and fine-tuning is supported by extensive experiments. - Theoretical justification is provided for the claims, including the derivation of how $P^T G$ can be replaced by B. - The study includes numerous ablation studies, providing a comprehensive understanding of the method. - The method allows fitting a 13B-parameter model on a single GPU, an important step toward memory-efficient pre-training of large language models. - Overall, this study is solid and impactful. weaknesses: - The motivation and derivation for initializing the B matrix with $P^{-1}(W_q - W)$ could be explained more clearly and in more detail. - The paper would benefit from insights on why LoQT-nq uses more memory than GaLore in pre-training models. confidence: 3
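The following sketch turns this review's step-by-step description into rough, runnable pseudocode for a single LoQT cycle. It is an assumption-laden illustration: the quantizer is a crude stand-in for NF4, the inner objective is a toy reconstruction loss rather than the language-modeling loss, and the exact initialization and schedules in the paper differ.

```python
import torch

def fake_quantize(w: torch.Tensor) -> torch.Tensor:
    """Stand-in for NF4 quantization (per-tensor 4-bit uniform here), only to
    make the sketch runnable; LoQT uses proper NF4 quantization."""
    scale = w.abs().max() / 7
    return torch.round(w / scale).clamp(-8, 7) * scale

def loqt_cycle_sketch(W, grad, rank=8, inner_steps=20, lr=1e-2):
    """Very rough sketch of one LoQT cycle as described in the review above:
    quantize W, build P from the gradient's SVD, train only B, then merge."""
    W_q = fake_quantize(W)                              # frozen, quantized weights
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                                     # fixed low-rank projection
    B = torch.zeros(rank, W.shape[1], requires_grad=True)
    opt = torch.optim.Adam([B], lr=lr)
    for _ in range(inner_steps):                        # only B receives updates
        opt.zero_grad()
        loss = ((W_q + P @ B) - W).pow(2).mean()        # toy objective for illustration
        loss.backward()
        opt.step()
    return fake_quantize(W_q + P @ B.detach())          # merge and re-quantize

W = torch.randn(64, 64)
W_new = loqt_cycle_sketch(W, grad=torch.randn_like(W))
print(torch.norm(W_new - W).item())
```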
JvG3BLkteR
LoQT: Low Rank Adapters for Quantized Training
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, achieving similar performance to full training, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 13B parameters on a consumer-grade 24GB GPU.
[ "Quantization", "Low-Rank Adaptation", "Memory Efficient Training", "Large Language Models" ]
https://openreview.net/pdf?id=JvG3BLkteR
CHW3k4e0p2
official_review
1,718,310,160,844
JvG3BLkteR
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission25/Reviewer_thmZ" ]
title: Nice work on parameter-efficient training for quantized models. summary: This paper proposes LoQT, which is suitable for quantized models in both pretraining and finetuning. The method iteratively updates the weight matrix. As a result, LoQT achieves better compression and, at the same time, the best evaluation performance. strengths: 1. The paper is clearly written and the method is explained in detail. 2. The quantitative analysis and comparison with other LoRA variants are very comprehensive. weaknesses: 1. The method is a bit more complicated than vanilla LoRA since it involves iterative merge/refactoring steps. 2. What is the intuition for not updating P during the training cycle? confidence: 3
IpMMl92TJA
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
[ "Brian R. Bartoldson", "James Diffenderfer", "Konstantinos Parasyris", "Bhavya Kailkhura" ]
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$\%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$\%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$\% ($70$\%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$\% AutoAttack accuracy ($+3$\% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$\%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$\%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research.
[ "adversarial robustness", "cifar10", "scaling laws", "alignment", "efficiency" ]
https://openreview.net/pdf?id=IpMMl92TJA
i8C0gzB43U
official_review
1,718,511,695,038
IpMMl92TJA
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission42/Reviewer_jbqs" ]
title: Useful scaling laws to evaluate compute efficiency and the effect of data quality. The impact is however restricted to a specific problem on a single dataset. summary: This paper proposes scaling laws for the accuracy of image classifiers under adversarial attacks, depending on their number of parameters as well as the size and quality of the dataset used for training. It provides 3 approaches to find the compute-optimal model for a task, which allows the authors to outperform the SOTA on AutoAttack accuracy while reducing the model's number of parameters. They also show that classification robustness to adversarial attacks scales logarithmically with FLOPs, restricting the achievable robustness to around 90% accuracy on this task. They argue that this is also the accuracy achieved by humans on the same problem. strengths: - These novel scaling laws take data quality into account, which was not done before. - The paper proposes a model that improves the SOTA on the AutoAttack classification task. - Different experiments were conducted in various settings, showing consistent results. weaknesses: - In the introduction & related work, the problem of *invalid data* and human robustness is presented as part of the work. However, it is not explored in the main paper, only in the appendices. - AutoAttack is the only task evaluated. Is there any other available evaluation for the robustness of your method? - L.320: It is not clear to me where the $7822 N D$ FLOPs constraint comes from. - No code is provided for reproducibility. confidence: 3 limitations: The proposed scaling laws only apply to image classification under adversarial attacks. All experiments were done on CIFAR-10 with data augmentation; no other dataset was tested.
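To illustrate what "robustness slowly grows then plateaus" means in practice, here is a sketch of fitting a saturating scaling law to robustness-vs-compute points. The data points and functional form below are made up for illustration; the paper fits its own laws in terms of model size, dataset size, and synthetic-data quality.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_law(log10_flops, a, b, alpha):
    """Robustness that grows slowly with compute and plateaus at `a`.
    One simple functional form, not the paper's exact parameterization."""
    return a - b * np.power(10.0, -alpha * log10_flops)

# Hypothetical (made-up) data points: (log10 training FLOPs, robust accuracy).
compute = np.array([17.0, 17.5, 18.0, 18.5, 19.0, 19.5])
robust_acc = np.array([0.58, 0.63, 0.67, 0.70, 0.72, 0.735])

params, _ = curve_fit(saturating_law, compute, robust_acc, p0=[0.9, 30.0, 0.12], maxfev=20000)
a, b, alpha = params
print(f"fitted plateau (asymptotic robustness): {a:.2f}")
print(f"predicted robustness at 10^21 FLOPs: {saturating_law(21.0, *params):.2f}")
```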
IpMMl92TJA
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
[ "Brian R. Bartoldson", "James Diffenderfer", "Konstantinos Parasyris", "Bhavya Kailkhura" ]
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$\%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$\%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$\% ($70$\%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$\% AutoAttack accuracy ($+3$\% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$\%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$\%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research.
[ "adversarial robustness", "cifar10", "scaling laws", "alignment", "efficiency" ]
https://openreview.net/pdf?id=IpMMl92TJA
MsBWHFgpap
meta_review
1,718,636,544,585
IpMMl92TJA
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission42/Area_Chair_NYbh" ]
metareview: The paper received a single review. Upon checking the paper, the AC agrees with the assessment, with the work containing novel components and strong results. Hence, the AC recommends acceptance. recommendation: Accept (Poster) confidence: 4
IpMMl92TJA
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
[ "Brian R. Bartoldson", "James Diffenderfer", "Konstantinos Parasyris", "Bhavya Kailkhura" ]
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$\%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$\%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$\% ($70$\%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$\% AutoAttack accuracy ($+3$\% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$\%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$\%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research.
[ "adversarial robustness", "cifar10", "scaling laws", "alignment", "efficiency" ]
https://openreview.net/pdf?id=IpMMl92TJA
GqziMJPsB9
decision
1,718,651,578,110
IpMMl92TJA
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
GR5LXaglgG
DASH: Warm-Starting Neural Network Training Without Loss of Plasticity Under Stationarity
[ "Baekrok Shin", "Junsoo Oh", "Hanseul Cho", "Chulhee Yun" ]
Warm-starting neural networks by initializing them with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization compared to training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data. Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.
[ "incremental learning", "loss of plasticity", "plasticity", "warm-starting", "stationarity", "DASH", "Direction-Aware SHrinking" ]
https://openreview.net/pdf?id=GR5LXaglgG
rOnK5QjPkW
official_review
1,718,329,740,968
GR5LXaglgG
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission13/Reviewer_sYbB" ]
title: A novel method to mitigate plasticity loss in neural networks under stationary data distributions, but lacks sufficient mathematical rigor and reproducibility resources summary: The authors propose Direction-Aware SHrinking (DASH), a method that focuses on mitigating the loss of plasticity when warm-starting neural networks in stationary data distributions. The main contributions of their work include the identification of noise memorization as the primary cause of plasticity loss in stationary settings, which was previously thought to occur mainly in non-stationary distributions. The DASH method selectively forgets memorized noise while preserving useful features, aiming to retain plasticity. The approach is validated through extensive experiments on various vision classification tasks, demonstrating improved test accuracy and training efficiency compared to existing methods. Experiments conducted on datasets such as Tiny-ImageNet, CIFAR-10, CIFAR-100, and SVHN using models like ResNet-18, VGG-16, and an MLP showed that DASH outperforms traditional warm-starting and other baselines in terms of both accuracy and training efficiency. strengths: * Identifying noise memorization as a cause of plasticity loss in stationary settings is a significant contribution. * The idea of focusing on noise memorization in stationary distributions is interesting and somewhat novel. However, the concept of plasticity loss itself is not new. * DASH effectively addresses the identified problem, showing improvements in both accuracy and training efficiency. * The method is validated across multiple datasets and models, providing robust evidence of its efficacy. * The paper is generally well-written, but some sections, particularly those describing the theoretical framework, could be clearer and more concise. weaknesses: * The theoretical framework lacks sufficient mathematical rigor. The proofs and explanations need to be more robust and comprehensive. * The absence of provided datasets and code hinders the reproducibility and transparency of the research. * The analysis of why DASH works in practical settings could be more detailed. There is a need for a deeper exploration of its limitations and potential drawbacks. * Figures and tables are useful, but some could be better explained. More detailed descriptions of the experimental setup and hyperparameters would improve clarity. * The methodology is sound, but the theoretical framework could be more rigorously detailed. The connection between the empirical observations and the theoretical justifications could be clearer. confidence: 4 limitations: * The paper is well-referenced, building appropriately on existing work. However, it could engage more critically with related literature to better situate its contributions. * The related work section is thorough but could benefit from a more critical comparison of DASH with other methods addressing plasticity loss. * The experiments seem reproducible, but the paper lacks explicit details on dataset availability and code. More transparency in the experimental setup and providing code repositories would enhance reproducibility. * The contributions are valuable, but the paper does not completely revolutionize the understanding of plasticity loss. It provides an incremental improvement with a specific focus on stationary data distributions. suggestions: * Future research should explore extending DASH to non-stationary settings and other types of data distributions. Investigating its performance on more complex and varied datasets would also be valuable. * The paper would benefit from a deeper mathematical exploration of the conditions under which noise memorization occurs and how it interacts with different model architectures.
GR5LXaglgG
DASH: Warm-Starting Neural Network Training Without Loss of Plasticity Under Stationarity
[ "Baekrok Shin", "Junsoo Oh", "Hanseul Cho", "Chulhee Yun" ]
Warm-starting neural networks by initializing them with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization compared to training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data. Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.
[ "incremental learning", "loss of plasticity", "plasticity", "warm-starting", "stationarity", "DASH", "Direction-Aware SHrinking" ]
https://openreview.net/pdf?id=GR5LXaglgG
jOZ723NVdg
decision
1,718,650,359,319
GR5LXaglgG
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
GR5LXaglgG
DASH: Warm-Starting Neural Network Training Without Loss of Plasticity Under Stationarity
[ "Baekrok Shin", "Junsoo Oh", "Hanseul Cho", "Chulhee Yun" ]
Warm-starting neural networks by initializing them with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization compared to training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data. Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.
[ "incremental learning", "loss of plasticity", "plasticity", "warm-starting", "stationarity", "DASH", "Direction-Aware SHrinking" ]
https://openreview.net/pdf?id=GR5LXaglgG
XHnk3M3Hjr
official_review
1,718,261,710,328
GR5LXaglgG
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission13/Reviewer_mpvC" ]
title: Review of DASH for improving plasticity of NNs summary: The paper discusses the loss of plasticity in the context of in-distribution (stationary) data distributions and proposes a method to mitigate the issue by removing noise from learned features. The method was validated on image classification in an online learning setup and compared to Shrink & Perturb, pre-training on the same data distribution (warm), and random initialization (cold). Recommendation: Accept strengths: This paper provides a practical solution in an online training setup to mitigate noise memorization and its impact on model generalization. weaknesses: - Lines 345-356 (left): Regarding shrinking the weight vector, based on the description, is the assumption that the initial weight had learned important features and not noise? How is this guaranteed? - Lines 375-377 (left): with regard to determining whether accuracy on previously learned data is maintained, - Why was the number of steps used as a metric for training cost instead of FLOPs? - Clarity: - Line 194-left: on the discussion of training time - could use clarification - Line 188: In remark 3.2, how is a fixed number of feature combinations guaranteed for each experiment? - Line 302 right: How is the noise level $\gamma$ established? And the threshold of learned features $\tau$? confidence: 4 limitations: The experiments were limited to classification tasks. suggestions: **Experiments**: - It would be interesting to see if DASH preserves its performance under more complex learning objectives, e.g. Bayesian learning (SVI/MCMC) or RL (which the authors mention in their discussion). **Cosmetic**: - For more consistency with the existing literature, consider changing $\mathcal{L}$, used to denote the "learned features", as it normally denotes the loss function. Similarly, consider changing $\mathcal{N}$ as it is usually used to denote a normal distribution. - The colors within Figure 2 are not distinguishable from each other; please use different markers (circle vs. square or triangle) and different colors to make it more legible. Also, the transparent lines and faint dots are too light in color to be properly interpreted. **Other** - Figure cross-referencing is incorrect - for example, in line 260 Figure 3.2 is referenced while there is no Figure numbered 3.2 (it's Figure 2). And in line 306 Figure 4.1 does not exist - it's Figure 3. And Figure 4.2 - it's Figure 4.
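For context on the Shrink & Perturb baseline that the review above compares DASH against, here is a minimal sketch of that baseline's warm-start step: scale the learned weights toward zero and add a small amount of fresh noise before training on the next data chunk. The shrink/perturb values are placeholders, and the noise here is plain Gaussian rather than drawn from the layer's initialization distribution; DASH itself differs by shrinking selectively based on directional information, which this sketch does not show.

```python
import torch
from torch import nn

def shrink_and_perturb(model: nn.Module, shrink: float = 0.4, perturb: float = 0.1):
    """Shrink-and-Perturb warm-start baseline: theta <- shrink * theta + perturb * noise."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(shrink).add_(perturb * torch.randn_like(p))

# Toy usage between two rounds of an online / incremental training loop.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
# ... train on the first data chunk ...
shrink_and_perturb(model)   # re-warm the network before training on the next chunk
```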
GR5LXaglgG
DASH: Warm-Starting Neural Network Training Without Loss of Plasticity Under Stationarity
[ "Baekrok Shin", "Junsoo Oh", "Hanseul Cho", "Chulhee Yun" ]
Warm-starting neural networks by initializing them with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization compared to training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data. Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.
[ "incremental learning", "loss of plasticity", "plasticity", "warm-starting", "stationarity", "DASH", "Direction-Aware SHrinking" ]
https://openreview.net/pdf?id=GR5LXaglgG
MRbqzXO4Cz
meta_review
1,718,633,651,335
GR5LXaglgG
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission13/Area_Chair_3dAV" ]
metareview: All reviewers champion the acceptance of this manuscript. The AC encourages the authors to take the suggestions of Reviewer sYbB into account and consider experiments on larger models. recommendation: Accept (Poster) confidence: 4
GR5LXaglgG
DASH: Warm-Starting Neural Network Training Without Loss of Plasticity Under Stationarity
[ "Baekrok Shin", "Junsoo Oh", "Hanseul Cho", "Chulhee Yun" ]
Warm-starting neural networks by initializing them with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization compared to training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data. Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.
[ "incremental learning", "loss of plasticity", "plasticity", "warm-starting", "stationarity", "DASH", "Direction-Aware SHrinking" ]
https://openreview.net/pdf?id=GR5LXaglgG
1h3CgbhS4y
official_review
1,718,301,456,931
GR5LXaglgG
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission13/Reviewer_dEcw" ]
title: A strong study on plasticity in neural networks summary: The paper provides a thorough take on maintaining plasticity in neural networks under warm-starting. The authors present detailed theoretical and empirical analyses of this problem. strengths: 1. Extremely thorough experimentation and writing. 2. Strong theoretical and empirical results. 3. Ample ablations and analyses of the problem. weaknesses: In my opinion, there are no substantial or concrete weaknesses in the paper. confidence: 3 suggestions: It would have been better if the paper had also studied slightly larger models and a different modality, such as language. However, the paper is very strong even without this and should be accepted.
FUMiupdGzg
Efficient Adaptive Federated Optimization
[ "Su Hyeong Lee", "Sidharth Sharma", "Manzil Zaheer", "Tian Li" ]
Adaptive optimization plays a pivotal role in federated learning, where simultaneous server- and client-side adaptivity have been shown to be essential for maximizing its performance. However, the scalability of jointly adaptive systems is often constrained by limited resources in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named FedAda$^2$, designed specifically for large-scale, cross-device federated environments. FedAda$^2$ optimizes communication efficiency by avoiding the transfer of preconditioners between the server and clients, while simultaneously utilizing memory-efficient adaptive optimizers on the client-side to reduce extra on-device memory cost. Theoretically, we demonstrate that FedAda$^2$ achieves the same convergence rates for general, non-convex objectives as its more resource-intensive counterparts that naively integrate joint adaptivity. Empirically, we showcase the benefits of joint adaptivity and the effectiveness of FedAda$^2$ on several image datasets.
[ "Federated Learning", "Optimization", "Adaptivity" ]
https://openreview.net/pdf?id=FUMiupdGzg
yH7DISRIka
official_review
1,718,317,580,986
FUMiupdGzg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission29/Reviewer_YaJd" ]
title: Adaptive client optimizer summary: This paper proposes a class of adaptive distributed learning algorithms to mitigate communication and memory restrictions. It introduces a strategy that allows clients to initialize local preconditioners and to adopt a memory-efficient optimizer that factorizes gradient statistics for dimension reduction. The authors prove that their approach achieves similar convergence to other server-side adaptive FL algorithms in non-convex settings. strengths: This paper proposes an interesting approach for adaptive client optimization, which is not widely addressed in the FL community. The authors provide extensive discussion on the topic, present the motivation for the problem, and offer technical analysis. Numerical experiments are also provided to support the proposed method. weaknesses: I find the paper very hard to follow and lacking in narrative. The motivation for using adaptive clients is not clearly explained. Even though a client with heavy-tailed gradients can potentially harm the training process, it is unclear whether this phenomenon is guaranteed in the domain of Byzantine machine learning. Additionally, I think some existing gradient-based client selection methods can also address this problem, so it is not clear why adaptive clients are the preferred approach. confidence: 2
FUMiupdGzg
Efficient Adaptive Federated Optimization
[ "Su Hyeong Lee", "Sidharth Sharma", "Manzil Zaheer", "Tian Li" ]
Adaptive optimization plays a pivotal role in federated learning, where simultaneous server- and client-side adaptivity have been shown to be essential for maximizing its performance. However, the scalability of jointly adaptive systems is often constrained by limited resources in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named FedAda$^2$, designed specifically for large-scale, cross-device federated environments. FedAda$^2$ optimizes communication efficiency by avoiding the transfer of preconditioners between the server and clients, while simultaneously utilizing memory-efficient adaptive optimizers on the client-side to reduce extra on-device memory cost. Theoretically, we demonstrate that FedAda$^2$ achieves the same convergence rates for general, non-convex objectives as its more resource-intensive counterparts that naively integrate joint adaptivity. Empirically, we showcase the benefits of joint adaptivity and the effectiveness of FedAda$^2$ on several image datasets.
[ "Federated Learning", "Optimization", "Adaptivity" ]
https://openreview.net/pdf?id=FUMiupdGzg
KDErR7e0r1
decision
1,718,722,173,918
FUMiupdGzg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Program_Chairs" ]
decision: Accept (Poster) comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together! title: Paper Decision
FUMiupdGzg
Efficient Adaptive Federated Optimization
[ "Su Hyeong Lee", "Sidharth Sharma", "Manzil Zaheer", "Tian Li" ]
Adaptive optimization plays a pivotal role in federated learning, where simultaneous server- and client-side adaptivity have been shown to be essential for maximizing its performance. However, the scalability of jointly adaptive systems is often constrained by limited resources in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named FedAda$^2$, designed specifically for large-scale, cross-device federated environments. FedAda$^2$ optimizes communication efficiency by avoiding the transfer of preconditioners between the server and clients, while simultaneously utilizing memory-efficient adaptive optimizers on the client-side to reduce extra on-device memory cost. Theoretically, we demonstrate that FedAda$^2$ achieves the same convergence rates for general, non-convex objectives as its more resource-intensive counterparts that naively integrate joint adaptivity. Empirically, we showcase the benefits of joint adaptivity and the effectiveness of FedAda$^2$ on several image datasets.
[ "Federated Learning", "Optimization", "Adaptivity" ]
https://openreview.net/pdf?id=FUMiupdGzg
DlCpvCzo0v
official_review
1,718,230,751,852
FUMiupdGzg
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission29/Reviewer_TQun" ]
title: Review for paper "Efficient Adaptive Federated Optimization" summary: This paper proposes $FedAda^2$, an efficient adaptive federated optimiser that leverages constant preconditioning and factorised gradient statistics to achieve low-bandwidth, low-memory adaptive optimisation. Theoretical convergence guarantees and an empirical evaluation on transformers support the authors' claims of negligible accuracy degradation. strengths: * Strong related work section * Valuable contribution to a central problem in FL, and particularly applicable to attention-based networks * Inclusion of both theoretical and empirical results to support claims. * I liked the heterogeneous optimiser setup of client and server weaknesses: * Missing quantification of the memory and bandwidth gains for the experiments run * Only one modality and one type of network have been evaluated * The analysis applies only to full-gradient descent confidence: 3 limitations: * There is no mention of how compatible $FedAda^2$ is with local DP noise. * How amenable to attacks from malicious actors does $FedAda^2$ make the federated optimisation? Is gradient clipping the only counter-measure applied? * Is the current scheme applicable to asynchronous federated learning aggregation? * How does the current optimiser behave in low-resource settings where each client only has a budget for very few local steps? * The authors motivate their method as being tailored for cross-device federated learning, but ultimately it is not clear whether they have evaluated their claims under partial client participation. suggestions: * The evaluation lacks an analysis of the memory and bandwidth gains of $FedAda^2$ compared to previous adaptive optimisers. I would suggest that the authors provide these numbers for completeness. * I would urge the authors to include some discussion of the privacy and robustness of their algorithm.
DOUskwCqg5
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
[ "Vijay Lingam", "Atula Tejaswi Neerkaje", "Aditya Vavre", "Aneesh Shetty", "Gautham Krishna Gudur", "Joydeep Ghosh", "Eunsol Choi", "Alex Dimakis", "Aleksandar Bojchevski", "sujay sanghavi" ]
Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights \(\mathbf{W}\) and inject learnable matrices \(\mathbf{\Delta W}\). These \(\mathbf{\Delta W}\) matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on \(\mathbf{\Delta W}\) depends on the specific weight matrix \(\mathbf{W}\). Specifically, SVFT updates \(\mathbf{W}\) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to \textbf{96\%} of full fine-tuning performance while training only \textbf{0.006 to 0.25\%} of parameters, outperforming existing methods that only recover up to \textbf{85\%} performance using \textbf{0.03 to 0.8\%} of the trainable parameter budget.
[ "Parameter Efficient Fine-Tuning", "Large Language Models" ]
https://openreview.net/pdf?id=DOUskwCqg5
fDTE5bDwim
official_review
1,718,292,682,838
DOUskwCqg5
[ "everyone" ]
[ "ICML.cc/2024/Workshop/WANT/Submission9/Reviewer_Sr29" ]
title: Novel idea for parameter efficient fine-tuning of Transformer models summary: This paper proposes a method to reduce the number of trainable parameters during fine-tuning while improving generalization. The proposed approach computes an SVD of the pre-trained weight and initializes the adapter as a reconstruction from the resulting singular vectors and singular values. However, instead of learning all components, only corrections to the singular values are learned. I have seen this approach applied earlier in the metric learning literature [1], but it was nice to see it revisited in this context. The experiments are quite exhaustive, demonstrating the efficacy of the proposed approach. strengths: - The method is clearly well motivated, and different variants for learning the singular values are presented. The authors also revisit related work and show how those solutions can be expressed as special cases of their generic formulation. - I really appreciate the ablations in Figure 4 (comparing various parameterizations) and the study in Section 5.5 analyzing the quality of pre-trained weights. - Thorough evaluation on various tasks in the NLP and CV domains. weaknesses: - The only limitation I can think of is the extra memory overhead compared to LoRA. This line of research was started to allow people to fine-tune large GPT models on GPUs with limited memory. The authors have also noted this weakness in their paper. So, while the theoretical and empirical results are good, it would be great (as potential future work) to mitigate this issue. confidence: 5 limitations: None
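Since the review above describes the SVFT mechanism only in prose, a short sketch may help make it concrete. The snippet below is a hedged illustration, not the authors' implementation: it assumes PyTorch, the class name `SVDDiagAdapterLinear` and the parameter name `delta_s` are invented for this example, and it covers only the simplest (diagonal) case in which one learnable scale is added per singular value of the frozen pre-trained weight; the paper's more general sparse combinations of outer products are not reproduced here.

```python
# Hypothetical sketch: a linear layer whose frozen pre-trained weight W is adapted
# by learning one correction per singular value, i.e. Delta W = U diag(delta_s) V^T,
# where W = U diag(S) V^T is the (thin) SVD of W. Names and structure are illustrative.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class SVDDiagAdapterLinear(nn.Module):
    def __init__(self, weight: torch.Tensor, bias: Optional[torch.Tensor] = None):
        super().__init__()
        # Freeze the pre-trained weight and cache its singular vectors once.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("weight", weight)  # (out_features, in_features), frozen
        self.register_buffer("U", U)            # (out_features, r)
        self.register_buffer("Vh", Vh)          # (r, in_features)
        self.register_buffer("bias", bias)      # may be None
        # The only trainable parameters: one scale correction per singular value.
        self.delta_s = nn.Parameter(torch.zeros_like(S))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Delta W is built from outer products of W's own singular vectors.
        delta_w = self.U @ torch.diag(self.delta_s) @ self.Vh
        return F.linear(x, self.weight + delta_w, self.bias)


# Toy usage on a random "pre-trained" weight of shape (64, 128).
layer = SVDDiagAdapterLinear(torch.randn(64, 128))
out = layer(torch.randn(4, 128))  # output shape (4, 64)
num_trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(num_trainable)  # 64, i.e. min(out_features, in_features)
```

In this toy setup only min(out_features, in_features) scalars are trained per layer, which is consistent with the very small trainable-parameter fractions reported in the abstract; the extra memory overhead the reviewer mentions would come from storing U and Vh alongside the frozen weight.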